Overview

No-Transaction Band Network:
A Neural Network Architecture for Efficient Deep Hedging

Open In Colab

Minimal implementation and experiments of "No-Transaction Band Network: A Neural Network Architecture for Efficient Deep Hedging".

Hedging and pricing financial derivatives while taking transaction costs into account is a tough task. Since the hedging optimization is computationally expensive or even intractable, the risk premiums of derivatives are often overpriced. This problem hinders the liquid offering of financial derivatives.

Our proposal, the "No-Transaction Band Network", enables precise hedging with far fewer simulations. This improvement leads to cheaper risk premiums and thus makes the derivative market more liquid. We believe that our proposal brings the data-driven derivative business via "Deep Hedging" much closer to practical applications.

Summary

  • Deep Hedging is a deep-learning-based framework for hedging financial derivatives.
  • However, a hedging strategy is hard to train due to action dependence, i.e., the appropriate hedging action at the next step depends on the current action.
  • We propose the "No-Transaction Band Network" to overcome this issue.
  • This network circumvents the action dependence and facilitates quick and precise hedging.

Motivation and Result

Hedging financial derivatives (exotic options in particular) in the presence of transaction costs is a hard task.

In the absence of transaction costs, a perfect hedge is accessible via the Black-Scholes model. The real market, in contrast, always involves transaction costs, which make the hedging optimization much more challenging. Since analytic formulas (such as the Black-Scholes formula for a European option) are no longer available in such a market, human traders may hedge, and then price, derivatives based on their experience.

Deep Hedging is a ground-breaking framework to automate and optimize such operations. In this framework, a neural network is trained to hedge derivatives so that it minimizes a proper risk measure. However, training in deep hedging suffers from the difficulty of action dependence: an appropriate action at the next step depends on the current action.
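To make the training setup concrete, here is a minimal sketch of a deep-hedging objective, assuming simulated price paths, a proportional transaction cost, and the entropic risk measure; `hedging_pnl`, `entropic_risk`, and the feature set are illustrative, not this repository's API. Note how the transaction-cost term couples `new_delta` to the previous `delta`: this is exactly the action dependence described above.

```python
import torch

def entropic_risk(pnl: torch.Tensor, a: float = 1.0) -> torch.Tensor:
    # Entropic risk measure: (1 / a) * log E[exp(-a * pnl)].
    n = torch.tensor(pnl.numel(), dtype=pnl.dtype)
    return (torch.logsumexp(-a * pnl, dim=0) - torch.log(n)) / a

def hedging_pnl(model, paths, payoff, cost=1e-3):
    # paths: tensor of simulated underlier prices, shape (n_paths, n_steps).
    n_paths, n_steps = paths.shape
    delta = torch.zeros(n_paths)   # current hedge ratio
    pnl = -payoff(paths)           # we are short the derivative
    for t in range(n_steps - 1):
        time_left = torch.full((n_paths,), float(n_steps - 1 - t))
        # The current hedge ratio `delta` enters the features: this is the
        # action dependence that makes an ordinary network slow to train.
        features = torch.stack([paths[:, t], time_left, delta], dim=1)
        new_delta = model(features).squeeze(-1)
        pnl = pnl + new_delta * (paths[:, t + 1] - paths[:, t])     # hedging gain
        pnl = pnl - cost * (new_delta - delta).abs() * paths[:, t]  # transaction cost
        delta = new_delta
    return pnl

# One optimization step minimizes the risk of the hedged position:
#   loss = entropic_risk(hedging_pnl(model, paths, payoff))
#   loss.backward(); optimizer.step()
```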

We therefore propose the "No-Transaction Band Network" for efficient deep hedging. This architecture circumvents the complication of action dependence, facilitating quick training and better hedging.

[Figure: learning histories (hedging loss vs. number of training steps) for a lookback option]

The learning histories above demonstrate that the no-transaction band network can be trained much more quickly than the ordinary feed-forward network (see our paper for details).

[Figure: derivative price spread vs. transaction cost for a lookback option]

The figure above plots the derivative price (technically, the derivative price spread: the price minus the corresponding price without transaction costs) as a function of the transaction cost. The no-transaction band network attains cheaper prices than both the ordinary network and an approximate analytic formula.

Proposed Architecture: No-Transaction Band Network

The following figures show schematic diagrams of the neural network originally proposed in Deep Hedging (left) and of the no-transaction band network (right).

[Figure: schematic diagrams of the ordinary feed-forward network (left) and the no-transaction band network (right)]

  • The original network:
    • The neural network takes the current hedge ratio (δ_ti) as an input, in addition to other market information (I_ti).
    • Since the input includes the current action δ_ti, this network suffers from the complication of action dependence.
  • The no-transaction band network:
    • This architecture computes a "no-transaction band" [b_l, b_u] with a neural network and then obtains the next hedge ratio by clamping the current hedge ratio into this band (see the sketch after this list).
    • Since the input of the neural network does not include the current action, this architecture circumvents the action dependence and facilitates training.
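As a concrete illustration, here is a minimal PyTorch sketch of a single step of the no-transaction band network, following the paper's construction: the band edges are the Black-Scholes delta shifted by leaky-ReLU-activated network outputs, and the next hedge ratio is the current one clamped into the band. The names `ntb_step`, `net`, `bs_delta`, and `prev_delta` are illustrative.

```python
import torch
import torch.nn.functional as F

def ntb_step(net, features, bs_delta, prev_delta):
    # `net` is an arbitrary feed-forward network mapping market features
    # (which do NOT include the current hedge ratio) to two band offsets.
    offsets = net(features)                         # shape (n_paths, 2)
    lower = bs_delta - F.leaky_relu(offsets[:, 0])  # band edge b_l
    upper = bs_delta + F.leaky_relu(offsets[:, 1])  # band edge b_u
    # Clamp the current hedge ratio into [b_l, b_u]: a trade happens only
    # when the current position falls outside the no-transaction band.
    return torch.minimum(torch.maximum(prev_delta, lower), upper)
```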

Give it a Try!

Open In Colab

You can try out the efficacy of the No-Transaction Band Network in a Jupyter Notebook: main.ipynb.

As you can see there, the no-transaction band can be implemented by simply adding one special layer to an arbitrary neural network (as in the sketch in the previous section).

A comprehensive library for Deep Hedging, pfhedge, is available on PyPI.
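For reference, a quickstart-style sketch of pfhedge usage follows; the class names and the `inputs` feature names are taken from the library's documentation at the time of writing and may differ between versions, so treat the exact API as an assumption and consult the pfhedge README.

```python
# A sketch of deep hedging with pfhedge; treat the exact API as an assumption.
from pfhedge.instruments import BrownianStock, EuropeanOption
from pfhedge.nn import Hedger, MultiLayerPerceptron

derivative = EuropeanOption(BrownianStock(cost=1e-3))  # underlier with transaction cost
model = MultiLayerPerceptron()
hedger = Hedger(model, inputs=["log_moneyness", "time_to_maturity", "volatility", "prev_hedge"])
hedger.fit(derivative, n_paths=10000, n_epochs=200)
price = hedger.price(derivative, n_paths=10000)
```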

References

  • Shota Imaki, Kentaro Imajo, Katsuya Ito, Kentaro Minami and Kei Nakagawa, "No-Transaction Band Network: A Neural Network Architecture for Efficient Deep Hedging". arXiv:2103.01775 [q-fin.CP].
  • Shota Imaki, Kentaro Imajo, Katsuya Ito, Kentaro Minami and Kei Nakagawa, "A Neural Network Architecture for Efficient Deep Hedging" (in Japanese). JSAI Special Interest Group on Financial Informatics (SIG-FIN), 26th meeting.
  • Hans Bühler, Lukas Gonon, Josef Teichmann and Ben Wood, "Deep hedging". Quantitative Finance, 2019, 19, 1271–1291. arXiv:1609.05213 [q-fin.CP].
Comments
  • Create Jupyter Notebook

    • Edit main.py into a form that ipynb-py-convert can process, and add brief explanations in markdown.
    • Convert main.py to main.ipynb.
    • Once the contents of main.ipynb are confirmed to be correct, ~~delete main.py at the end and~~ squash and merge.
    opened by simaki 2
  • FIX: Fix 403 error and enable executing `main.ipynb` on Google Colab

    • Fix the 403 error of the "Open in Colab" buttons in the README.
    • Add a curl command fetching utils.py so that main.ipynb can import utils.
    • Prompt the user to enable the GPU on Google Colab, if it is not enabled.
    opened by simaki 0
  • Zero Transaction Cost is less than Black Scholes Price

    Hi

    The paper shows that, for cost = 0, the price for all the methods should be close to 0.022101; however, we expect this price to be higher than the Black-Scholes price (given continuous hedging). But for a European call option with strike = 1.0, maturity = 30/365, and volatility = 0.2, the Black-Scholes price is 0.02287, which is higher than all three prices. I also ran the main.ipynb file with cost = 0.0 and it shows the premium as 0.022101. Is this a bug, or am I missing something here?

    | Cost     | No-transaction band | Feed-forward        | Asymptotic          |
    |----------|---------------------|---------------------|---------------------|
    | 0.000000 | 0.022101 ± 0.000004 | 0.022101 ± 0.000004 | 0.022101 ± 0.000004 |

    opened by ap2881 0