Alternatives to Deep Neural Networks for Function Approximations in Finance

Code companion repo

Overview

This repository contains the Python code accompanying our paper, whose details can be found below.

We provide our implementations of the generalized stochastic sampling (gSS) and functional Tensor Train (fTT) algorithms from the paper, together with related routines. This is a somewhat simplified version of the code that produced the test results we reported. The simplifications were made to improve clarity and didactic value, at the (small) expense of omitting some secondary tricks and variations.

The code is released under the MIT License.

Installing the code

You do not have to install this package after downloading it -- see the next section on how to use it without any installation. But if you want to call our routines from a different project or directory, execute the following from the altnnpub directory, assuming this is the root of the project (the directory where the file you are reading is located):

altnnpub>pip install -e .

Then you can call various methods from your code like this:

from nnu import gss_kernels
kernel = gss_kernels.global_kernel_dict(1.0)['invquad']
...
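As an illustration of what can follow -- a minimal sketch, assuming the object returned by global_kernel_dict is a plain callable vectorized over numpy arrays (this is an assumption; see nnu/gss_kernels.py for the actual signatures):

import numpy as np
from nnu import gss_kernels

# Hypothetical usage: evaluate the inverse-quadratic kernel on a grid.
# We assume the kernel is a vectorized callable; check the repository
# source for the true interface.
kernel = gss_kernels.global_kernel_dict(1.0)['invquad']
x = np.linspace(-3.0, 3.0, 7)
print(kernel(x))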

To uninstall the package, run (from anywhere):

blah>pip uninstall altnnpub

Running the code

The main entry point to the code is main.py in the ./nnu folder. Assuming the project directory is called altnnpub, the code is run via Python module syntax:

altnnpub>python -m nnu.main

Various options, such as which functions to fit and which models to use, can be set in main.py.

Results are reported in the terminal and are also stored in the ./results directory.

All of our (non-test) Python code is in the ./nnu directory.

Jupyter notebooks

We provide a number of notebooks that demonstrate, at varying levels of detail, how to build and use certain models:

  • ftt_als_01.ipynb: Functional Tensor Train (fTT) approximation using the Alternating Least Squares (ALS) algorithm
  • functional_2D_low_rank_01.ipynb: Low-rank functional approximation of 2D functions done manually. This is an illustrative example of ALS applied to calculate successive rank-1 approximations, as described in the paper (see the sketch after this list)
  • gss_example_keras_direct_01.ipynb: Create and test the generalized Stochastic Sampling (gSS) model. In this notebook we do it "by hand", i.e. using granular interfaces such as the Keras functional interface. Here we create a hidim version of the model, with the Adam optimizer for the frequency bounds (aka scales) and linear regression for the outer (linear) weights
  • gss_example_model_factory_01.ipynb: Create and test the generalized Stochastic Sampling (gSS) model. This notebook uses gss_model_factory and other higher-level interfaces that the main entry point (./nnu/main.py) eventually calls. We create a onedim version of the model, with a one-dim optimizer for the frequency bounds (aka scales) and linear regression for the outer (linear) weights
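To make the successive rank-1 ALS idea concrete, here is a minimal self-contained numpy sketch; it illustrates the general technique on a sampled 2D function and is not code from this repository:

import numpy as np

# Sample a 2D function on a grid; F holds its values
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(0.0, 1.0, 60)
F = np.exp(-np.outer(x, y))

rank = 3
approx = np.zeros_like(F)
residual = F.copy()
for r in range(rank):
    u = np.ones(len(x))               # initial guess for the x-factor
    for _ in range(20):               # alternating least-squares sweeps
        v = residual.T @ u / (u @ u)  # best y-factor for fixed u
        u = residual @ v / (v @ v)    # best x-factor for fixed v
    approx += np.outer(u, v)          # add the converged rank-1 term
    residual = F - approx             # deflate before the next term
print('relative error:', np.linalg.norm(residual) / np.linalg.norm(F))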

Test suite

Unit tests are collected in the ./test directory and provide useful examples of how different parts of the code can be used. The test suite can be run in the standard Python way using pytest, e.g. from the command line at the project root directory:

altnnpub>pytest

Pytest can be installed with the pip install pytest command.

Individual tests can be run using a command of the form pytest -k test_blah, which can be useful for debugging. This is all well explained in the pytest documentation.

The tests are there predominantly to show how to call certain functions. They mostly check that the code simply runs, rather than verifying numerical output, except for the tests in test_gss_report_generator.py, where actual fitting results are compared to the expected ones. The tests produce various output that may be interesting to see -- the pytest -s option will print out whatever the tests print.
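For example, to run only the fitting-results tests mentioned above and see their output (a hedged example -- pytest's -k option matches against test and module names):

altnnpub>pytest -k gss_report_generator -s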

Requirements

The code has been tested with Python 3.7 and 3.8. See requirements.txt for the required packages.
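The required packages can be installed in the usual way, e.g.

altnnpub>pip install -r requirements.txt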

Contacting us

Our contact details are in the SSRN link below.

Details of the paper

Antonov, Alexandre and Piterbarg, Vladimir, Alternatives to Deep Neural Networks for Function Approximations in Finance (November 7, 2021). Available at SSRN: https://ssrn.com/abstract=3958331 or http://dx.doi.org/10.2139/ssrn.3958331
