WideLinears
Parallel neural networks in PyTorch
A package of PyTorch modules for fast parallelization of separate deep neural networks. Ideal for agent-based systems such as evolutionary algorithms.
Installation
WideLinears is available through PyPI:
pip install widelinears
Pytorch Modules
WideLinear
Represents a family of parallel Linear layers that share the same input and output sizes
Parameters
- beings (int): Number of parallel Linear layers
- input_size (int): Size of input of each linear layer
- output_size (int): Size of output of each linear layer
Input Tensor Shapes
- (input_size,) will clone this input and give it to each Linear in the module, outputs (beings, output_size)
- (beings, input_size) will give each Linear its own input vector, outputs (beings, output_size)
- (batch, beings, input_size) will give each Linear its own batch of inputs, outputs (batch, beings, output_size)
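The three input shapes above can be sketched in plain PyTorch: a family of parallel Linear layers is essentially a batched matrix multiply. This is an illustrative sketch of the documented shape semantics, not the package's actual internals; all names here are made up for the example.

```python
import torch

beings, input_size, output_size, batch = 4, 5, 3, 8

# One weight matrix and bias vector per parallel layer ("being")
weight = torch.randn(beings, input_size, output_size)
bias = torch.randn(beings, output_size)

def wide_forward(x):
    """Mimic the three accepted input shapes."""
    if x.dim() == 1:                # (input_size,): clone the input to every being
        x = x.expand(beings, -1)
    if x.dim() == 2:                # (beings, input_size): one input per being
        return torch.einsum('bi,bio->bo', x, weight) + bias
    # (batch, beings, input_size): one batch of inputs per being
    return torch.einsum('nbi,bio->nbo', x, weight) + bias

print(wide_forward(torch.randn(input_size)).shape)                 # (beings, output_size)
print(wide_forward(torch.randn(beings, input_size)).shape)         # (beings, output_size)
print(wide_forward(torch.randn(batch, beings, input_size)).shape)  # (batch, beings, output_size)
```

A single einsum over the stacked weights is what makes the parallel layers faster than looping over separate nn.Linear modules.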
Methods
- forward (Tensor): Returns output for input tensors as explained above
- clone_being (source, destination): Clones the Linear layer from one position to another, overwriting what was there
- get_single (position): Returns a LinearWidePointer instance that points into this module but behaves as a normal single nn.Linear
- to_linears (): Returns a list of nn.Linear instances with the same parameters as each Linear in this module
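A hedged sketch of what an operation like to_linears amounts to: copying each being's weight and bias into a standalone nn.Linear and checking that the outputs match the batched forward. Again, the names and storage layout are assumptions for illustration, not the package's real implementation.

```python
import torch
import torch.nn as nn

beings, input_size, output_size = 4, 5, 3
weight = torch.randn(beings, output_size, input_size)  # nn.Linear layout: (out, in)
bias = torch.randn(beings, output_size)

def to_linears():
    """Build one nn.Linear per being with copied parameters."""
    layers = []
    for b in range(beings):
        layer = nn.Linear(input_size, output_size)
        with torch.no_grad():
            layer.weight.copy_(weight[b])
            layer.bias.copy_(bias[b])
        layers.append(layer)
    return layers

x = torch.randn(beings, input_size)
wide_out = torch.einsum('bi,boi->bo', x, weight) + bias           # batched forward
singles = to_linears()
single_out = torch.stack([singles[b](x[b]) for b in range(beings)])
print(torch.allclose(wide_out, single_out, atol=1e-6))
```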
WideDeep
WideDeep builds deep neural networks out of WideLinear layers, simplifying the construction of parallel deep neural networks. It behaves as a group of separate deep networks that run in parallel for better time efficiency.
Parameters
- beings (int): Number of parallel Deep NNs
- input_size (int): Size of input of each Deep NN
- hidden_size (int): Size of each hidden layer in each Deep NN
- depth (int): Number of hidden layers (if 1, there is a single Linear layer from input to output)
- output_size (int): Size of output of each Deep NN
- non_linear (optional function): Non-linearity applied after each intermediate layer (defaults to ReLU)
- final_nl (optional function): Non-linearity applied at the output (defaults to sigmoid)
Input Tensor Shapes
- (input_size,) will clone this input and give it to each Deep NN, outputs (beings, output_size)
- (beings, input_size) will give each Deep NN its own input vector, outputs (beings, output_size)
- (batch, beings, input_size) will give each Deep NN its own batch of inputs, outputs (batch, beings, output_size)
Methods
- forward (Tensor): Returns output for input tensors as explained above
- clone_being (source, destination): Clones the Deep NN from one position to another, overwriting what was there
Example architecture for the following parameters:
- beings = 4
- input_size = 5
- hidden_size = 3
- depth = 3
- output_size = 4
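The example architecture above can be sketched as a stack of batched linear stages in plain PyTorch. This is a minimal sketch of the documented behavior, not the package's code; it assumes depth counts the Linear stages, consistent with the note that depth = 1 gives a single Linear from input to output.

```python
import torch

def make_wide_deep(beings, input_size, hidden_size, depth, output_size):
    """Random parameters for `beings` parallel MLPs, stored as batched tensors."""
    sizes = ([input_size] + [hidden_size] * (depth - 1) + [output_size]
             if depth > 1 else [input_size, output_size])
    weights = [torch.randn(beings, i, o) * i ** -0.5
               for i, o in zip(sizes, sizes[1:])]
    biases = [torch.zeros(beings, o) for o in sizes[1:]]
    return weights, biases

def wide_deep_forward(x, weights, biases,
                      non_linear=torch.relu, final_nl=torch.sigmoid):
    """x: (beings, input_size) or (batch, beings, input_size)."""
    for i, (w, b) in enumerate(zip(weights, biases)):
        x = torch.einsum('...bi,bio->...bo', x, w) + b
        # ReLU between layers, sigmoid on the final output, as in the defaults
        x = final_nl(x) if i == len(weights) - 1 else non_linear(x)
    return x

# The example architecture: beings=4, input_size=5, hidden_size=3, depth=3, output_size=4
weights, biases = make_wide_deep(beings=4, input_size=5,
                                 hidden_size=3, depth=3, output_size=4)
out = wide_deep_forward(torch.randn(8, 4, 5), weights, biases)
print(out.shape)  # (batch, beings, output_size)
```

The `'...bi,bio->...bo'` einsum handles both the per-being and the batched input shapes with a single expression.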
License
MIT
Made by João Figueira