PyTorch-based framework for Deep Hedging

Overview

PFHedge: Deep Hedging in PyTorch

PFHedge is a PyTorch-based framework for Deep Hedging.

What is Deep Hedging?

Deep Hedging is a deep learning-based framework to hedge financial derivatives.

Hedging financial derivatives in the presence of market frictions (e.g., transaction cost) is a challenging task. In the absence of market frictions, the perfect hedge is accessible based on the Black-Scholes model. The real market, in contrast, always involves frictions and thereby makes hedging optimization much more challenging. Since the analytic formulas (such as the Black-Scholes formula) are no longer available in such a market, it may be necessary to adjust model-based Greeks to hedge and price derivatives based on experiences.

Deep Hedging is a ground-breaking framework to optimize such hedging operations. In this framework, a neural network is trained to hedge derivatives so that it minimizes a proper risk measure. By virtue of the high expressive power of neural networks and modern optimization algorithms, one can expect to achieve the optimal hedge by training a neural network. Indeed, the experiments in Bühler et al. (2019) and Imaki et al. (2021) show the high feasibility and scalability of Deep Hedging algorithms for options under transaction costs.

Global investment banks are looking to rethink Greeks-based hedging with Deep Hedging and slash a considerable amount of hedging costs. This could be a "game-changer" in the trillion-dollar derivatives industry.

PFHedge enables you to experience this revolutionary framework on your own. You can try, tweak, and delve into Deep Hedging algorithms using PyTorch. We hope PFHedge accelerates the research and development of Deep Hedging.

Features

Imperative Experiences

  • PFHedge is designed to be intuitive and imperative to streamline your research on Deep Hedging.
  • You can quickly build a Hedger and then fit and price derivatives right away.
  • You can easily tweak your model, risk measure, derivative, optimizer, and other setups on the fly.

Seamless Integration with PyTorch

  • PFHedge is built to be deeply integrated into PyTorch.
  • Your Deep-Hedger can be built as a Module and trained by any Optimizer.
  • You can use GPUs to boost your hedging optimization (See below).

Effortless Extensions

  • You can build new hedging models, derivatives, and features with little glue code.
  • You can build new hedging models by just subclassing Module.
  • You can quickly try out your own stochastic processes, derivatives, and input features.

Batteries Included

  • PFHedge comes with ready-to-use modules such as BlackScholes, WhalleyWilmott, EntropicRiskMeasure, and MultiLayerPerceptron (see the examples below).

Install

pip install pfhedge

How to Use

Prepare a Derivative to Hedge

Financial instruments are provided in pfhedge.instruments and classified into two types:

  • Primary instruments: A primary instrument is a basic financial instrument that is traded on a market, and therefore its price is directly accessible as the market price. Examples include stocks, bonds, commodities, and currencies.
  • Derivative instruments: A derivative is a financial instrument whose payoff is contingent on a primary instrument. An (over-the-counter) derivative is not traded on the market, and therefore its price is not directly accessible. Examples include EuropeanOption, LookbackOption, VarianceSwap, and so forth.

We consider a BrownianStock, which is a stock following the geometric Brownian motion, and a EuropeanOption which is contingent on it. We assume that the stock has a transaction cost of 1 basis point.

from pfhedge.instruments import BrownianStock
from pfhedge.instruments import EuropeanOption

stock = BrownianStock(cost=1e-4)
derivative = EuropeanOption(stock)

derivative
# EuropeanOption(
#   strike=1., maturity=0.0800
#   (underlier): BrownianStock(sigma=0.2000, cost=1.0000e-04, dt=0.0040)
# )

Create Your Hedger

A Hedger in Deep Hedging is basically characterized by three elements:

  • a model to compute the hedge ratio,
  • the input features fed to the model, and
  • the criterion, i.e., the risk measure to be minimized.

Here we use a multi-layer perceptron as our model.

from pfhedge.nn import Hedger
from pfhedge.nn import MultiLayerPerceptron

model = MultiLayerPerceptron()
hedger = Hedger(model, inputs=["log_moneyness", "expiry_time", "volatility", "prev_hedge"])

The hedger is also a Module.

hedger
# Hedger(
#   inputs=['log_moneyness', 'expiry_time', 'volatility', 'prev_hedge']
#   (model): MultiLayerPerceptron(
#     (0): LazyLinear(in_features=0, out_features=32, bias=True)
#     (1): ReLU()
#     (2): Linear(in_features=32, out_features=32, bias=True)
#     (3): ReLU()
#     (4): Linear(in_features=32, out_features=32, bias=True)
#     (5): ReLU()
#     (6): Linear(in_features=32, out_features=32, bias=True)
#     (7): ReLU()
#     (8): Linear(in_features=32, out_features=1, bias=True)
#     (9): Identity()
#   )
#   (criterion): EntropicRiskMeasure()
# )

Fit and Price

Now we train our hedger so that it minimizes the risk measure through hedging.

The hedger is trained as follows. In each epoch, we generate Monte Carlo paths of the asset prices and let the hedger hedge the derivative by trading the stock. The hedger's risk measure (EntropicRiskMeasure() in our case) is computed from the resulting profit and loss distribution, and the parameters in the model are updated.

hedger.fit(derivative, n_epochs=200)
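
If you prefer an explicit training loop, the same optimization can be sketched by hand; this is a rough equivalent, assuming Hedger.compute_loss(derivative, n_paths=...) simulates paths and evaluates the criterion (any PyTorch optimizer works since the hedger is a Module):

import torch

# Run the loss once so that lazy layers (e.g. LazyLinear) materialize their parameters.
_ = hedger.compute_loss(derivative, n_paths=1)

optimizer = torch.optim.Adam(hedger.parameters())

for _ in range(200):
    optimizer.zero_grad()
    # Simulate Monte Carlo paths, hedge them, and evaluate the risk measure.
    loss = hedger.compute_loss(derivative)
    loss.backward()
    optimizer.step()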

Once we have trained the hedger, we can evaluate the derivative price as the utility indifference price (for details, see the Deep Hedging paper and references therein).

price = hedger.price(derivative)
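
To inspect the profit-and-loss distribution behind this price, you can sample it directly; a minimal sketch, assuming Hedger.compute_pnl accepts an n_paths argument as in the fit and price calls:

pnl = hedger.compute_pnl(derivative, n_paths=10000)  # one P&L value per simulated path
print(pnl.mean(), pnl.std())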

More Examples

Use GPU

To employ the desired device and/or dtype in fitting and pricing, use the to method.

import torch

dtype = torch.float64
device = torch.device("cuda:0")

derivative = EuropeanOption(BrownianStock()).to(dtype, device)
hedger = Hedger(...).to(dtype, device)

Black-Scholes' Delta-Hedging Strategy

In this strategy, a hedger continuously rebalances their portfolio to keep it delta-neutral. The hedge ratio at each time step is given by the Black-Scholes delta.

This strategy is optimal in the absence of transaction costs. When costs are present, however, it transacts too frequently and incurs excessive transaction costs.

from pfhedge.nn import BlackScholes
from pfhedge.nn import Hedger

derivative = EuropeanOption(BrownianStock(cost=1e-4))

model = BlackScholes(derivative)
hedger = Hedger(model, inputs=model.inputs())
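
Since BlackScholes has no parameters to train, you can price the derivative with this baseline hedger right away:

price = hedger.price(derivative)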

Whalley-Wilmott's Asymptotically Optimal Strategy for Small Costs

This strategy was proposed by Whalley and Wilmott (1997) and is proven to be asymptotically optimal for small transaction costs.

In this strategy, a hedger keeps their hedge ratio within a band around the Black-Scholes delta (called the no-transaction band) and does not transact while the ratio stays inside the band. This strategy is optimal in the limit of small transaction costs, but suboptimal for large ones.

from pfhedge.nn import Hedger
from pfhedge.nn import WhalleyWilmott

derivative = EuropeanOption(BrownianStock(cost=1e-3))

model = WhalleyWilmott(derivative)
hedger = Hedger(model, inputs=model.inputs())

Your Own Module

You can employ any Module you build as a hedging model. The input/output shapes are (N, H_in) -> (N, 1), where N is the number of Monte Carlo paths of the assets and H_in is the number of input features.

Here we show an example of the No-Transaction Band Network proposed in Imaki et al. (2021).

import torch.nn.functional as fn
from torch import Tensor
from torch.nn import Module
from pfhedge.nn import BlackScholes
from pfhedge.nn import Clamp
from pfhedge.nn import MultiLayerPerceptron


class NoTransactionBandNet(Module):
    def __init__(self, derivative):
        super().__init__()

        self.delta = BlackScholes(derivative)
        self.mlp = MultiLayerPerceptron(out_features=2)
        self.clamp = Clamp()

    def inputs(self):
        return self.delta.inputs() + ["prev_hedge"]

    def forward(self, input: Tensor) -> Tensor:
        # The last input feature is the previous hedge ratio.
        prev_hedge = input[:, [-1]]

        # Black-Scholes delta and the learned widths of the no-transaction band.
        delta = self.delta(input[:, :-1]).reshape(-1, 1)
        width = self.mlp(input[:, :-1])

        min = delta - fn.leaky_relu(width[:, [0]])
        max = delta + fn.leaky_relu(width[:, [1]])

        # Keep the previous hedge if it lies inside the band; otherwise clamp onto the band.
        return self.clamp(prev_hedge, min=min, max=max)


model = NoTransactionBandNet(derivative)
hedger = Hedger(model, inputs=model.inputs())

Autogreek

The module pfhedge.autogreek provides functions that evaluate greeks of derivatives using automatic differentiation.

import torch

import pfhedge.autogreek as autogreek
from pfhedge.instruments import BrownianStock
from pfhedge.instruments import EuropeanOption
from pfhedge.nn import Hedger
from pfhedge.nn import WhalleyWilmott

derivative = EuropeanOption(BrownianStock(cost=1e-4))

model = WhalleyWilmott(derivative)
hedger = Hedger(model, inputs=model.inputs())

def pricer(spot):
    return hedger.price(derivative, init_state=(spot,), enable_grad=True)

delta = autogreek.delta(pricer, spot=torch.tensor(1.0))
# tensor(0.5092)
gamma = autogreek.gamma(pricer, spot=torch.tensor(1.0))
# tensor(0.0885)
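
Under the hood, these helpers rely on PyTorch's automatic differentiation. Here is a minimal sketch of the same computation without pfhedge.autogreek, reusing the pricer defined above (torch.autograd.grad is plain PyTorch):

spot = torch.tensor(1.0, requires_grad=True)
price = pricer(spot)
delta, = torch.autograd.grad(price, spot, create_graph=True)  # dPrice/dSpot
gamma, = torch.autograd.grad(delta, spot)                     # d2Price/dSpot2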

Contribution

Any contributions to PFHedge are more than welcome!

  • GitHub Issues: Bug reports, feature requests, and questions.
  • Pull Requests: Bug-fixes, feature implementations, and documentation updates.

Please take a look at CONTRIBUTING.md before creating a pull request.

This project is owned by Preferred Networks and maintained by Shota Imaki.

References

  • Bühler et al. Deep hedging. Quantitative Finance, 2019.
  • Imaki et al. No-Transaction Band Network: A Neural Network Architecture for Efficient Deep Hedging, 2021.
  • Whalley and Wilmott. An asymptotic analysis of an optimal hedging model for option pricing with transaction costs. Mathematical Finance, 1997.

Comments
  • 'nan' problem in d1 and BlackScholes.price

    This rarely happens, but I faced this issue:

    import torch
    from pfhedge.instruments import EuropeanOption
    from pfhedge.instruments import BrownianStock
    derivative = EuropeanOption(BrownianStock(), maturity=5/250)
    from pfhedge.nn import BlackScholes
    
    BlackScholes(derivative).price(log_moneyness=torch.tensor(1), time_to_maturity=torch.tensor(0), volatility=torch.tensor(0.2))
    # works normally: tensor(1.7183)
    BlackScholes(derivative).price(log_moneyness=torch.tensor(0), time_to_maturity=torch.tensor(0), volatility=torch.tensor(0.2))
    # error
    
    BlackScholes(derivative).price(log_moneyness=torch.tensor(1e-7), time_to_maturity=torch.tensor(0), volatility=torch.tensor(0.2))
    # tensor(1.1921e-07)
    BlackScholes(derivative).price(log_moneyness=torch.tensor(-1e-7), time_to_maturity=torch.tensor(0), volatility=torch.tensor(0.2))
    # tensor(0.)
    

    Considering the continuity of pricing, the error case should return a tensor of 0.

    This issue comes from a problem in the Black-Scholes d1 calculation:

    (0.0).div(0.0) in the d1 calculation returns nan: https://github.com/pfnet-research/pfhedge/blob/c342b52ba2d8c6182304a20a0db21e83130799ef/pfhedge/nn/modules/bs/european.py#L222 https://github.com/pfnet-research/pfhedge/blob/c342b52ba2d8c6182304a20a0db21e83130799ef/pfhedge/nn/functional.py#L521

    Then, via https://github.com/pfnet-research/pfhedge/blob/c342b52ba2d8c6182304a20a0db21e83130799ef/pfhedge/nn/functional.py#L473 and https://github.com/pytorch/pytorch/blob/71f889c7d265b9636b93ede9d651c0a9c4bee191/torch/distributions/normal.py#L79-L82, self._validate_sample(value) is called, and the validations in https://github.com/pytorch/pytorch/blob/71f889c7d265b9636b93ede9d651c0a9c4bee191/torch/distributions/distribution.py#L286-L294 raise a ValueError because of the nan.
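
    For illustration, the 0/0 in d1 can be reproduced with the textbook d1 formula itself (a standalone sketch, not pfhedge's code):

    import torch

    s = torch.tensor(0.0)  # log moneyness
    t = torch.tensor(0.0)  # time to maturity
    v = torch.tensor(0.2)  # volatility

    d1 = (s + (v ** 2 / 2) * t) / (v * t.sqrt())  # 0.0 / 0.0 -> nan
    print(d1)  # tensor(nan)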

    I think,

    BlackScholes(derivative).price(log_moneyness=torch.tensor(0), time_to_maturity=torch.tensor(0), volatility=torch.tensor(0.2))
    

    should return a tensor of 0, but I'm not sure whether we should fix the d1 calculation or add an exceptional routine for this edge case.

    opened by masanorihirano 13
  • BUG: Invalid Price time series from Heston Process

    The current implementation of the Heston process can produce price time series containing non-positive prices because of the discrete approximation.

    https://github.com/pfnet-research/pfhedge/blob/main/pfhedge/stochastic/heston.py#L103

    For example, we can face this issue with this code:

    import torch
    from pfhedge.instruments import BrownianStock
    from pfhedge.instruments import HestonStock
    from pfhedge.instruments import EuropeanOption
    from pfhedge.nn import Hedger, MultiLayerPerceptron
    torch.autograd.set_detect_anomaly(True)
    
    
    def main():
        torch.manual_seed(2)
        # stock = BrownianStock(cost=1e-4, volatility=0.2)
        stock = HestonStock()
        derivative = EuropeanOption(stock)
    
        model = MultiLayerPerceptron()
        hedger = Hedger(
            model, inputs=["log_moneyness", "expiry_time", "prev_hedge"])
    
        hedger.fit(derivative, n_epochs=200)
    
        price = hedger.price(derivative)
    
        print(price)
    
    
    if __name__ == '__main__':
        main()
    
    

    In this code,

    1. The Heston process generates a non-positive value.
    2. The non-positive spot price causes a log moneyness of -inf.
    3. The MLP returns nan because of the invalid input.
    4. Backpropagation works incorrectly.

    The current implementation uses a discrete approximation of the Heston stochastic differential equations (the equations were attached as images in the original issue).

    However, the equation for the spot price can be rewritten in a different (logarithmic) form.

    This transformation enables us to implement the Heston process geometrically and avoid the current issue.
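
    For reference, here is a minimal sketch of such a geometric (log-space) Euler scheme; it is not pfhedge's implementation, and the parameters (kappa, theta, sigma, rho) are the usual Heston ones with illustrative defaults:

    import torch

    def generate_heston_log_euler(n_paths, n_steps, init_spot=1.0, init_var=0.04,
                                  kappa=1.0, theta=0.04, sigma=0.2, rho=-0.7, dt=1 / 250):
        log_spot = torch.full((n_paths,), float(init_spot)).log()
        var = torch.full((n_paths,), float(init_var))
        spots = [log_spot.exp()]
        for _ in range(n_steps - 1):
            # Correlated Brownian increments for the spot and the variance.
            z1 = torch.randn(n_paths)
            z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * torch.randn(n_paths)
            # Update the log-price so that the spot stays strictly positive.
            log_spot = log_spot + (-var / 2) * dt + var.clamp(min=0.0).sqrt() * z1 * dt ** 0.5
            # Euler step for the variance, floored at zero.
            var = (var + kappa * (theta - var) * dt
                   + sigma * var.clamp(min=0.0).sqrt() * z2 * dt ** 0.5).clamp(min=0.0)
            spots.append(log_spot.exp())
        return torch.stack(spots, dim=1)  # shape: (n_paths, n_steps)

    spot = generate_heston_log_euler(n_paths=4, n_steps=10)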

    bug 
    opened by masanorihirano 11
  • Additional Features feature

    This is my suggestion for additional implementation, but I haven't implemented test code for it yet.

    Please consider implementing these suggestions in pfhedge.

    FeatureClassList is also related to Hedger in my opinion. Currently, Hedger holds a FeatureList as an attribute. https://github.com/pfnet-research/pfhedge/blob/7c31debe2a944a136bdf4c871fd4a1d2ec9a231e/pfhedge/nn/modules/hedger.py#L195

    On the other hand, the derivative is given from outside in the compute_hedge method. https://github.com/pfnet-research/pfhedge/blob/7c31debe2a944a136bdf4c871fd4a1d2ec9a231e/pfhedge/nn/modules/hedger.py#L249-L251

    In my opinion, the derivative should only be available inside the compute_hedge method, but, due to the following line, the derivative is broadcast to all places in Hedger through FeatureList.of. https://github.com/pfnet-research/pfhedge/blob/7c31debe2a944a136bdf4c871fd4a1d2ec9a231e/pfhedge/nn/modules/hedger.py#L287

    compute_hedge is a process requiring quite large computational resources, especially when the number of simulation paths is huge. Thus, splitting these paths into a small number of mini-batches as separate derivatives and their underliers and feeding them into the compute_hedge method in parallel is one solution. Of course, we should also handle prev_output differently to realize the parallelization. However, at the very least, broadcasting the derivative is harmful in my opinion.

    Thus, I think that, in Hedger, self.inputs should be a FeatureClassList and, at the beginning of compute_hedge, a FeatureList should be generated only for use inside the method, like this:

    inputs = self.inputs(derivative, self)
    

    Following this change, the get_input method should also be changed.

    Please consider the above changes.

    opened by masanorihirano 8
  • Heston volatility should be placed as a parameter of Heston class

    Currently, the Heston class has variance but not volatility as a buffer. However, for calculating delta and other greeks, volatility should also be registered.

    opened by masanorihirano 7
  • ENH: implicit args in blackscholes

    Currently, BlackScholes modules work like:

    from pfhedge.instruments import BrownianStock
    from pfhedge.instruments import EuropeanOption
    from pfhedge.nn import BlackScholes
    
    derivative = EuropeanOption(BrownianStock())
    pricer = lambda derivative: BlackScholes(derivative).price(
                 log_moneyness=derivative.log_moneyness(),
                 time_to_maturity=derivative.time_to_maturity(),
                 volatility=derivative.ul().volatility)
    derivative = EuropeanOption(BrownianStock(), maturity=5/250)
    derivative.list(pricer, cost=1e-4)
    derivative.simulate(n_paths=2)
    derivative.spot
    

    However, the part,

    pricer = lambda derivative: BlackScholes(derivative).price(
                 log_moneyness=derivative.log_moneyness(),
                 time_to_maturity=derivative.time_to_maturity(),
                 volatility=derivative.ul().volatility)
    

    seems a little redundant and not universal. I think this comes from the following reasons:

    1. It is obvious that each argument is generated from the derivative.
    2. The required arguments (and their number) depend on the underlier.

    Thus, I suggest making all arguments optional; if any are missing, the required arguments are calculated automatically from the registered derivative.

    Of course, in the case of a derivative using autogreek, these implicit arguments cannot be applied, but I think my suggestion is worth considering.

    This suggestion is not limited to BlackScholes.price; it also applies to all the other greeks methods.
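
    As a rough illustration, a hypothetical wrapper that fills the missing arguments from the registered derivative (price_implicit is illustrative only, not pfhedge API):

    def price_implicit(derivative, **kwargs):
        # Fill any missing Black-Scholes arguments from the derivative itself.
        kwargs.setdefault("log_moneyness", derivative.log_moneyness())
        kwargs.setdefault("time_to_maturity", derivative.time_to_maturity())
        kwargs.setdefault("volatility", derivative.ul().volatility)
        return BlackScholes(derivative).price(**kwargs)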

    enhancement black-scholes 
    opened by masanorihirano 6
  • Default parameter for init_state in heston process

    https://pfnet-research.github.io/pfhedge/stochastic.html#pfhedge.stochastic.generate_heston

    I think (1.0, sigma**2) is better for init_state. Thus, how about changing the default of init_state to None and computing it inside this method?

    opened by masanorihirano 6
  • Question about cash()

    Hello,

    I would like to ask what the idea behind cash() of the HedgeLoss ABC is. Currently, Hedger.price() ends up producing an output adjusted for the 'loss' measure, not really the derivative price itself.

    e.g. the following example returns an invalid price for the instrument:

    torch.manual_seed(42)
    
    # Prepare a derivative to hedge
    deriv = EuropeanOption(BrownianStock(), strike = 100.0)
    
    # Create your hedger
    model = MultiLayerPerceptron()
    hedger = Hedger(model, ["log_moneyness", "expiry_time", "volatility", "prev_hedge"])
    
    # Fit and price
    hedger.fit(deriv, n_paths=10000, n_epochs=200, init_price = 100.0)
    price = hedger.price(deriv, n_paths=10000, init_price = 100.0)
    print(f"Price={price:.5e}")
    

    Can you please elaborate on this idea? I guess I am missing something.

    question 
    opened by quant1729 6
  • Add DataHedger and DeepHedger

    1. Use a robust expression of entropic_risk_measure for large values.
    2. Add DataHedger, which uses any generated data and features.
    3. Add DeepHedger, which uses different neural network structures at different time steps.
    4. Add a notebook illustrating the use of DataHedger and DeepHedger.
    5. Add matplotlib to dependencies.
    opened by justinhou95 5
  • bugs in greeks calculation?

    I have checked whether the greeks calculation is correct, but vega seems incorrect.

    https://svc.qri.jp/jpx/nkopm/

    BSEuropeanOption(call=True, strike=28500).delta(log_moneyness=torch.tensor(28535/28500).log(), time_to_maturity=torch.tensor(17/252), volatility=torch.tensor(0.1817))
    # tensor(0.5198)
    BSEuropeanOption(call=True, strike=28500).gamma(log_moneyness=torch.tensor(28535/28500).log(), time_to_maturity=torch.tensor(17/252), volatility=torch.tensor(0.1817))
    # tensor(0.0003)
    BSEuropeanOption(call=True, strike=28500).vega(log_moneyness=torch.tensor(28535/28500).log(), time_to_maturity=torch.tensor(17/252), volatility=torch.tensor(0.1817))
    # tensor(2953.0981)
    
    BSEuropeanOption(call=False, strike=28500).delta(log_moneyness=torch.tensor(28535/28500).log(), time_to_maturity=torch.tensor(17/252), volatility=torch.tensor(0.1738))
    # tensor(-0.4802)
    BSEuropeanOption(call=False, strike=28500).gamma(log_moneyness=torch.tensor(28535/28500).log(), time_to_maturity=torch.tensor(17/252), volatility=torch.tensor(0.1738))
    # tensor(0.0003)
    BSEuropeanOption(call=False, strike=28500).vega(log_moneyness=torch.tensor(28535/28500).log(), time_to_maturity=torch.tensor(17/252), volatility=torch.tensor(0.1738))
    # tensor(2953.0750)
    

    Only vega is calculated via the autogreek module. Of course, one solution is to override the vega method in the BSEuropeanOption class. However, I think this issue is not limited to the BSEuropeanOption class.

    Of course, I think I can implement the vega method in the BSEuropeanOption class as a PR. But, unfortunately, I haven't found the cause in the autogreek module because I have never used this module.

    opened by masanorihirano 5
  • Default parameter of sigma in Heston process, CIR process

    I think the default value of sigma in the Heston class, the Heston process generation, and the CIR process generation is too big.

    Currently, sigma is set to 2.0.

    from pfhedge.stochastic.cir import generate_cir
    generate_cir(10, 100, sigma=2.0)
    generate_cir(10, 100, sigma=0.2)
    

    sigma=2.0 seems too big and produces many zeros as a result. In the Heston process, this means no volatility. I think sigma=0.2 is better.

    https://doi.org/10.1007/978-3-319-05221-2_3 (p. 75, Fig. 3.1) seems a good reference for this.

    opened by masanorihirano 4
  • Type incompatible in time-series generation and autogreek

    I think this is a very minor issue.

    The example for autogreek allows putting a torch.Tensor with autograd into init_state. https://github.com/masanorihirano/pfhedge/blob/develop/pfhedge/autogreek.py#L54-L71

    Through Hedger.price, Hedger.compute_pnl, and derivative.simulate, the underlying asset (such as BrownianStock) accepts the init_state.

    In the underlying assets, init_state is handled as Optional[tuple], for example: https://github.com/masanorihirano/pfhedge/blob/develop/pfhedge/instruments/primary/brownian.py#L55-L62 https://github.com/masanorihirano/pfhedge/blob/develop/pfhedge/instruments/primary/brownian.py#L108-L116

    However, in the stochastic modules, it is handled as float, for example: https://github.com/masanorihirano/pfhedge/blob/develop/pfhedge/stochastic/brownian.py#L7-L15

    Fortunately, this doesn't cause any issue now. However, it is very misleading and can cause an issue when we build our own time series. (Actually, it did for me.)

    init_state can accept multiple initial states, and their number is not fixed. Thus, it is reasonable for the underlying assets to accept it as Optional[tuple]. But, considering the case where it includes tensors with grad, it should be Optional[tuple[torch.Tensor]], and some functions in the stochastic modules should also accept torch.Tensor.

    But I understand this change requires wide-ranging modifications. Thus, it has to be considered carefully, even though this is a minor issue.
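
    A hypothetical widened signature along these lines (the names are illustrative only, not pfhedge API):

    from typing import Optional, Tuple, Union
    from torch import Tensor

    # Initial states may be plain floats or tensors carrying grad.
    InitStateType = Optional[Tuple[Union[float, Tensor], ...]]

    def generate_time_series(n_paths: int, n_steps: int, init_state: InitStateType = None) -> Tensor:
        ...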

    opened by masanorihirano 4
  • How to extract the optimal hedging paths (post model fitting)?

    Hi developers:

    I am using the pfhedge library for some simple examples, and I want to look at the optimal hedging paths after training the model. I used pos = hedger.compute_hedge(derivative).squeeze(1). Is this correct?

    Thanks

    opened by XiaoqiDong96 1
  • gpu bug in functional.py

    Hi,

    In line 506, a new tensor c is created without being assigned to the same device as the input tensors: c = torch.tensor(cost).unsqueeze(0).unsqueeze(-1)

    The example notebook fails when run on a GPU.
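
    A device-safe construction would presumably be something like the following (a hypothetical fix; input stands for a tensor already on the target device):

    c = torch.as_tensor(cost, device=input.device, dtype=input.dtype).unsqueeze(0).unsqueeze(-1)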

    Regards

    opened by zxem 3
  • Custom Input Clarification

    Hello -- It would be helpful for people who are not too familiar with PyTorch to show how to set up a BaseDerivative and BasePrimary.

    e.g. input = torch.tensor([[...], [...], ...]); BSEuropeanOption(input)

    opened by tjrs 0
Releases (0.20.0)
  • 0.20.0(Mar 31, 2022)

    Release/0.20.0 (#572)

    • ENH: Support PyTorch builtin loss functions for hedging loss (#568) (#569)

      • You can now use PyTorch built-in loss function modules as criterion of Hedger.

      • For instance, with MSELoss, criterion measures mean-squared error between the payoff of a contingent claim and its replicating portfolio.

      • Migration guide: If you have defined your own HedgeLoss, please modify the signatures of its methods as forward(self, input) -> forward(self, input, target=0.0) and cash(self, input) -> cash(self, input, target=0.0).

    • ENH: Support multiple hedges in nn.functional.pl (#571)

    • DOC: Add examples to Black-Scholes functionals (#566)

    • MAINT: Use cast_state (#567)

    • Bumping version from 0.19.2 to 0.20.0 (#573)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

    Source code(tar.gz)
    Source code(zip)
  • 0.19.2(Mar 27, 2022)

    Release/0.19.2 (#565)

    • MAINT: Directly compute greeks and fix bugs (#562)

    • DOC: fix typo (#560) (#561)

    • DOC: Update example in README.md (#564)

    • Bumping version from 0.19.1 to 0.19.2 (#563)

    Co-authored-by: GitHub Actions [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.19.1(Mar 23, 2022)

    • ENH: Add autogreek.gamma_from_delta (close #397) (#552)

    • ENH: Analytical BS European binary formulas (#437) (#553)

    • ENH: Analytical BS American binary formulas (#437) (#554)

    • DOC: Add notes on analytic formulas of price and greeks (#556)

    • DOC: Fix notebook and clear outputs (close #402) (#557)

    • DOC: Fix typo in generate_local_volatility_process (#551)

    • Bumping version from 0.19.0 to 0.19.1 (#559)

    Co-authored-by: GitHub Actions [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.19.0(Mar 14, 2022)

    Release/0.19.0 (#545)

    • ENH: Add box_muller (#534)

    • ENH: Add VasicekRate (close #505) (#538)

    • ENH: Add LocalVolatilityStock (#539)

    • DOC: Add documentation of features (#541)

    • DOC: Miscellaneous updates (#537) (#547) (#542) (#543) (#548)

    • MAINT: Fix primary spot typing (#530) (#533)

    • MAINT: Add extra_repr to SVIVariance (#535)

    • MAINT: Reimplement looking ahead to multiple underliers (#536)

    • MAINT: Add OptionMixin and deprecate BaseOption (#544)

      • BaseOption is deprecated. Inherit BaseDerivative and OptionMixin instead.
    • MAINT: Bumping version from 0.18.0 to 0.19.0 (#546)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

    Source code(tar.gz)
    Source code(zip)
  • 0.18.0(Mar 9, 2022)

    Release/0.18.0 (#531)

    • ENH: Add pfhedge.__version__ support (#514)

    • ENH: Add Black-Scholes formulas as functional (#489) (#506)

    • ENH: Add end_index to forward start payoff functional (#518)

    • ENH: Add clauses, named_clauses to derivative (#520)

    • ENH: implicit args for Black-Scholes modules (#516)

    • ENH: Add bilinear interpolation function bilerp (close #523) (#527)

    • ENH: Add .float16(), .float32(), .float64() to Instrument (#524)

    • BUG: Stop assigning arbitrary strike to autogreek.delta (#517)

    • DOC: Update functional documentations (#508)

    • DOC: Add note on discrete/continuous monitoring (#513)

    • DOC: Add note on adding clause (#515)

    • DOC: Elaborate documentation on payoff (#519)

    • MAINT: Refactor BlackScholes module using factory (close #509) (#510)

    • MAINT: Miscellaneous refactoring (#507) (#521) (#525)

    • CHORE: Run Publish action on release (#504)

    • Bumping version from 0.17.0 to 0.18.0 (#532)

    Co-authored-by: Masanori HIRANO [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

    Source code(tar.gz)
    Source code(zip)
  • 0.17.0(Feb 19, 2022)

    Release/0.17.0 (#501)

    • ENH: Add is_listed (close #274) (#495)

    • ENH: Add drift to generate_brownian and BrownianStock (close #112) (#497) (#500)

    • ENH: Add Derivative.delist() (#496)

    • ENH: Add EuropeanForwardStartOption (#443) (#498)

    • BUG: Fix 0/0 issue in d1, d2, and other functions (close #484) (#494)

    Co-authored-by: GitHub Actions [email protected] Co-authored-by: Masanori HIRANO [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.16.1(Feb 13, 2022)

    Release/0.16.1 (#492)

    • MAINT: Update lint, format GitHub Actions (#486)

    • MAINT: Update epsilon to finfo.tiny (#487)

    • MAINT: Refactor feature registrations using singleton (close #490) (#491)

    Source code(tar.gz)
    Source code(zip)
  • 0.16.0(Feb 8, 2022)

    Release/0.16.0 (#482)

    • ENH: Add SVI model: svi_variance and SVIVariance (#406) (#410)

    • ENH: Add Sobol quasirandom engine (#430) (#431) (#478)

    • ENH: Support BSEuropeanBinary for put (#434) (#438)

    • ENH: Enable customizing tqdm progress bar (#446)

    • ENH: Add antithetic sampling of randn (close #449) (#450)

    • DOC: Add an example of hedging variance swap using options (#426) (#435)

    • DOC: Add autogreek.theta to documentation (#429)

    • DOC: Add citation to Heston model (#447)

    • DOC: Add an example of sticky strike and sticky delta (#479)

    • TEST: Add tests for BSEuropean put (#439)

    • TEST: Add tests for identity between vega and gamma (#441)

    • MAINT: Update codecov action (#442)

    Source code(tar.gz)
    Source code(zip)
  • 0.15.0(Dec 23, 2021)

    Release/0.15.0 (#427)

    • ENH: Support vega and theta for BS modules (#424) (#412)

    • MAINT: Miscellaneous maintenance (#407) (#408)

    • MAINT: torch requirement changed to 1.9.0 (#425)

    • TEST: Add workflow to test that examples work (close #108) (#409)

    Co-authored-by: Masanori HIRANO [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

    Source code(tar.gz)
    Source code(zip)
  • 0.14.2(Nov 23, 2021)

    Release/0.14.2 (#404)

    • MAINT: Fix default params in CIR and Heston stochastic process (#401) (Thank you, @masanorihirano !)

    • TEST: Add and refactor tests (#400) (#394)

    • DOC: Clean documentation (#396)

    • DOC: Add example_heston_iv.py (#403)

    Co-authored-by: Masanori HIRANO [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.14.1(Nov 9, 2021)

    Release/0.14.1 (#392)

    • BUG: Fix BSAmericanBinaryOption (#366) (#391)

    • TEST: Add more tests to BS modules (#366) (#391)

    • MAINT: Rename base classes to "Base*" (close #384) (#386)

      • Instrument, Primary, Derivative are deprecated. Use Base* instead.
    • MAINT: Miscellaneous maintenance (#385) (#387) (#388) (#389)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.14.0(Oct 31, 2021)

    Release/0.14.0 (#382)

    • ENH: Add entropic_risk_measure to nn.functional (close #352) (#372)

    • ENH: Add value_at_risk to nn.functional (#371)

    • MAINT: Add typing (#378)

    • MAINT: Drop Python 3.6 (close #356) (#357)

      • Python 3.6 is no longer supported. Please update to >=3.7.
    • CHORE: Support PyTorch 1.10 (#377)

    • CHORE: Update README.md (#375)

    • CHORE: Update pytest-cov requirement from ^2.8.1 to ^3.0.0 (#367)

    • CHORE: Update Makefile (#381)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.13.2(Oct 1, 2021)

    Release/0.13.2 (#365)

    • MAINT: Fix buffer key (#353)

    • MAINT: Fix prevention of zero division (#359)

    • DOC: Miscellaneous updates (#358) (#360) (#361) (#362)

    • CHORE: Run CI and building documentation in Poetry environment (#363)

    • TEST: Fix typo in test of spot (#354)

    Co-authored-by: GitHub Actions [email protected] Co-authored-by: Masanori HIRANO [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.13.1(Sep 16, 2021)

    Release/0.13.1 (#349)

    • ENH: Add features spot and underlier_spot (#345)

    • ENH: Additional Features feature (#343)

    • MAINT: Update repr of clauses (#342)

    • DOC: Minor updates (#339) (#340) (#344)

    • DOC: Fix no-transaction band class for new implementation (#346)

    • CHORE: Measure test coverage for Python 3.9 (#347)

    Co-authored-by: Masanori HIRANO [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.13.0(Sep 7, 2021)

    Release/0.13.0 (#334)

    • ENH: Add Derivative.add_clause() (close #328) (#330)

    • MAINT: Prefer feature.get over feature.__getitem__ (#324)

    • MAINT: Deprecate specifying derivative dinfo (#331)

    • MAINT: Minor refactoring (#329)

    • DOC: Minor updates (#332) (#325)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.12.3(Sep 6, 2021)

    Release/0.12.3 (#323)

    • ENH: Add ncdf, npdf, d1, d2 to functional.py (#315)

    • ENH: Add Variance feature (close #269) (#319)

    • ENH: Add autogreek.vega (close #97) (#320)

    • MAINT: Refine typing (#316) (#317)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

    Source code(tar.gz)
    Source code(zip)
  • 0.12.2(Sep 3, 2021)

    Release/0.12.2 (#313)

    • BUG: Fix inappropriate instance generation of features (close #311) (#312) (Thank you, @masanorihirano !)

    Co-authored-by: Masanori HIRANO [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

    Source code(tar.gz)
    Source code(zip)
  • 0.12.1(Sep 2, 2021)

    Release/0.12.1 (#307)

    • DOC: Minor updates of documentation (#300) (#301)

    • MAINT: Minor refactoring (#302) (#304) (#305) (#306)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.12.0(Aug 31, 2021)

    Release/0.12.0 (#297)

    • ENH: Enable calling Instrument.to(instrument) (close #279) (#285)

    • ENH: Add CIRRate (close #264) (#287)

    • MAINT: Guarantee buffer is registered with desired dtype/device (close #278) (#286)

    • MAINT: Add assert_monotone, convex, cash_invariant (close #294) (#296)

    • DOC: Update example getting started (#288)

    • DOC: Add note on params to autogreek (#293)

    • DOC: Add detailed explanation of compute_loss, fit (close #282) (#295)

    • DOC: Elaborate docs (#289) (#290) (#291) (#292)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

    Source code(tar.gz)
    Source code(zip)
  • 0.11.1(Aug 24, 2021)

  • 0.11.0(Aug 24, 2021)

    Release/0.11.0 (#270)

    • ENH: Enable hedging with multiple instruments (close #132) (#268)

    • MAINT: Refactor compute_pnl (#261)

    • MAINT: Correct typos and clean docs (#260)

    • MAINT: Return namedtuple in generate_heston (close #259) (#262)

    • MAINT: Use torch.clamp in clamp and leaky_clamp (close #59) (#263)

    • MAINT: Refactor Hedger.fit using _configure_optimizer (#266)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

    Source code(tar.gz)
    Source code(zip)
  • 0.10.4(Aug 23, 2021)

    Release/0.10.4 (#257)

    • ENH: Accelerate compute_pnl (#247)

    • ENH: Add feature TimeToMaturity (close #246) (#252)

    • ENH: Add FeatureList and refactor Hedger (close #248) (#256)

    • DOC: Update doc of to (#241)

    • MAINT: Remove features/functional.py (close #249) (#251)

    • MAINT: Fix format action (#254)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.10.3(Aug 20, 2021)

    Release/0.10.3 (#239)

    • BUG: Fix BS European price put (#238) (Thank you, @masanorihirano !)

    • ENH: Beautify repr of instruments using extra_repr (#237)

    Co-authored-by: Masanori HIRANO [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.10.2(Aug 20, 2021)

    Release/0.10.2 (#234)

    • API: Rename volatility to sigma in Brownian (close #210) (#231)

    • DOC: WhalleyWilmott: Add Note that backward could generate nan (close #213) (#232)

    • DOC: Add intersphinx and copybutton to docs (#233)

    • CHORE: Create GitHub Action publish.yml (#230)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.10.1(Aug 18, 2021)

    Release/0.10.1 (#229)

    • DOC: Add docs of Derivative.list() to each derivative (#226)

    • DOC: Fix typo (#225)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.10.0(Aug 18, 2021)

    Release/0.10.0 (#223)

    • ENH: Enable hedging with derivative (#133) (#219)

    • DOC: Clarify "Otherwise:" in Clamp (#218)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.9.0(Aug 17, 2021)

    Release/0.9.0 (#215)

    • BUG: Fix time_to_maturity mismatch (close #211) (#214)

    • ENH: Add VarianceSwap (close #127) (#207)

    • DOC: Fix code blocks in docstrings (#199)

    • MAINT: Use disable in tqdm (#209)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.8.1(Aug 16, 2021)

    Release/0.8.1 (#205)

    • ENH: Add volatility to HestonStock (#201)

    • Change default value of sigma in Heston and CIR (close #203) (#204) (Thank you, @masanorihirano !)

    • DOC: Document buffers of primary instruments (#200)

    Co-authored-by: Masanori HIRANO [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.8.0(Aug 15, 2021)

    • API: Unify to init_state (close #189) (#193)

    • API: Add Derivative.ul() as an alias to underlier (close #183) (#195)

    • MAINT: Introduce and tame mypy (close #190) (#191)

    • MAINT: Add type check to CI (close #190) (#192)

    • MAINT: Refactor generate_cir (#194)

    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)
  • 0.7.5(Aug 13, 2021)

    • BUG: Fix invalid time series of CIR process (close #182) (#186) (Thank you, @masanorihirano !)

    • DOC: Add missing dt to generate_cir (#186)

    • DOC: Update an example in README.md (#181)

    Co-authored-by: GitHub Actions [email protected] Co-authored-by: Masanori HIRANO [email protected] Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: GitHub Actions [email protected]

    Source code(tar.gz)
    Source code(zip)