Synapses
A plug-and-play library for neural networks written in Python!
# run
pip install synapses-py==7.4.1
# in the directory of your project
Neural Network
Create a neural network
Import Synapses, call NeuralNetwork.init and provide the size of each layer.
from synapses_py import NeuralNetwork, ActivationFunction, DataPreprocessor, Statistics
layers = [4, 6, 5, 3]
neuralNetwork = NeuralNetwork.init(layers)
neuralNetwork has 4 layers. The first layer has 4 input nodes and the last layer has 3 output nodes. There are 2 hidden layers with 6 and 5 neurons respectively.
Get a prediction
inputValues = [1.0, 0.5625, 0.511111, 0.47619]
prediction = \
NeuralNetwork.prediction(neuralNetwork, inputValues)
prediction should be something like [ 0.8296, 0.6996, 0.4541 ].
Note that the lengths of inputValues and prediction are equal to the sizes of the input and output layers respectively.
Fit network
learningRate = 0.5
expectedOutput = [0.0, 1.0, 0.0]
fitNetwork = \
NeuralNetwork.fit(
neuralNetwork,
learningRate,
inputValues,
expectedOutput
)
fitNetwork is a new neural network trained with a single observation. To train a neural network, you should fit it with multiple datapoints.
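A minimal sketch of such a training loop follows; the dataset and the number of passes are made up for illustration, and each call to NeuralNetwork.fit returns a new network that is threaded through the loop.
# Illustrative only: dataset is a made-up list of (inputValues, expectedOutput) pairs
dataset = [
    ([1.0, 0.5625, 0.511111, 0.47619], [0.0, 1.0, 0.0]),
    ([0.8, 0.5, 0.45, 0.4], [1.0, 0.0, 0.0])
]

trainedNetwork = neuralNetwork
for _ in range(100):
    # repeat over the dataset for an arbitrary number of passes
    for (xs, ys) in dataset:
        trainedNetwork = NeuralNetwork.fit(trainedNetwork, learningRate, xs, ys)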
Create a customized neural network
The activation function of the neurons created with NeuralNetwork.init is a sigmoid one. If you want to customize the activation functions and the weight distribution, call NeuralNetwork.customizedInit.
from random import random

def activationF(layerIndex):
    # pick a different activation function for each layer
    if layerIndex == 0:
        return ActivationFunction.sigmoid
    elif layerIndex == 1:
        return ActivationFunction.identity
    elif layerIndex == 2:
        return ActivationFunction.leakyReLU
    else:
        return ActivationFunction.tanh

def weightInitF(_layerIndex):
    # initial weights drawn uniformly from (-1.0, 1.0)
    return 1.0 - 2.0 * random()

customizedNetwork = \
    NeuralNetwork.customizedInit(
        layers,
        activationF,
        weightInitF
    )
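The customized network is used like any other network; for example, with the inputValues from above:
customizedPrediction = \
    NeuralNetwork.prediction(customizedNetwork, inputValues)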
Visualization
Call NeuralNetwork.toSvg to take a brief look at its SVG drawing.
The color of each neuron depends on its activation function while the transparency of the synapses depends on their weight.
svg = NeuralNetwork.toSvg(customizedNetwork)
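Assuming the returned svg is plain SVG markup as a string, you could, for example, write it to a file (the file name here is arbitrary) and open it in a browser:
with open("network.svg", "w") as f:
    f.write(svg)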
Save and load a neural network
JSON instances are compatible across platforms! We can generate, train and save a neural network in Python and then load it and make predictions in JavaScript!
toJson
Call NeuralNetwork.toJson on a neural network and get a string representation of it. Use it as you like: save the json in the file system or insert it into a database table.
json = NeuralNetwork.toJson(customizedNetwork)
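For example, a minimal sketch that stores the string in a file (the path is arbitrary):
with open("network.json", "w") as f:
    f.write(json)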
ofJson
loadedNetwork = NeuralNetwork.ofJson(json)
As the name suggests, NeuralNetwork.ofJson turns a json string into a neural network.
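A minimal sketch of the full round trip, assuming the json string was saved to network.json as above:
with open("network.json", "r") as f:
    storedJson = f.read()

restoredNetwork = NeuralNetwork.ofJson(storedJson)
restoredPrediction = \
    NeuralNetwork.prediction(restoredNetwork, inputValues)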
Encoding and decoding
One hot encoding is a process that turns discrete attributes into a list of 0.0 and 1.0. Minmax normalization scales continuous attributes into values between 0.0 and 1.0. You can use DataPreprocessor for datapoint encoding and decoding.
The first parameter of DataPreprocessor.init is a list of (attributeName, discreteOrNot) tuples.
setosaDatapoint = {
"petal_length": "1.5",
"petal_width": "0.1",
"sepal_length": "4.9",
"sepal_width": "3.1",
"species": "setosa"
}
versicolorDatapoint = {
"petal_length": "3.8",
"petal_width": "1.1",
"sepal_length": "5.5",
"sepal_width": "2.4",
"species": "versicolor"
}
virginicaDatapoint = {
"petal_length": "6.0",
"petal_width": "2.2",
"sepal_length": "5.0",
"sepal_width": "1.5",
"species": "virginica"
}
datasetList = [ setosaDatapoint,
versicolorDatapoint,
virginicaDatapoint ]
dataPreprocessor = \
DataPreprocessor.init(
[ ("petal_length", False),
("petal_width", False),
("sepal_length", False),
("sepal_width", False),
("species", True) ],
iter(datasetList)
)
encodedDatapoints = map(lambda x:
DataPreprocessor.encodedDatapoint(dataPreprocessor, x),
datasetList
)
encodedDatapoints is equal to:
[ [ 0.0 , 0.0 , 0.0 , 1.0 , 0.0, 0.0, 1.0 ],
[ 0.511111, 0.476190, 1.0 , 0.562500, 0.0, 1.0, 0.0 ],
[ 1.0 , 1.0 , 0.166667, 0.0 , 1.0, 0.0, 0.0 ] ]
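The encoded datapoints can feed the network directly. A minimal sketch, assuming each encoded row is a plain list ordered as shown above (the first 4 values are the normalized continuous attributes, the last 3 the one hot encoded species) and using an arbitrary learning rate:
irisNetwork = NeuralNetwork.init([4, 6, 5, 3])
for datapoint in datasetList:
    encoded = DataPreprocessor.encodedDatapoint(dataPreprocessor, datapoint)
    xs = encoded[:4]  # minmax normalized continuous attributes
    ys = encoded[4:]  # one hot encoded species
    irisNetwork = NeuralNetwork.fit(irisNetwork, 0.1, xs, ys)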
Save and load the preprocessor by calling DataPreprocessor.toJson and DataPreprocessor.ofJson.
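They work just like their NeuralNetwork counterparts; for example:
preprocessorJson = DataPreprocessor.toJson(dataPreprocessor)
loadedPreprocessor = DataPreprocessor.ofJson(preprocessorJson)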
Evaluation
To evaluate a neural network, you can call Statistics.rootMeanSquareError and provide the expected and predicted values.
expectedWithOutputValuesList = \
[ ( [ 0.0, 0.0, 1.0], [ 0.0, 0.0, 1.0] ),
( [ 0.0, 0.0, 1.0], [ 0.0, 1.0, 1.0] ) ]
expectedWithOutputValuesIter = \
iter(expectedWithOutputValuesList)
rmse = Statistics.rootMeanSquareError(
expectedWithOutputValuesIter
)
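In practice, the pairs usually come from a network's predictions. A minimal sketch, where testSet is a made-up list of (inputValues, expectedOutput) pairs and fitNetwork is the network trained above:
testSet = [
    ([1.0, 0.5625, 0.511111, 0.47619], [0.0, 1.0, 0.0])
]
expectedWithPredicted = \
    [ (expected, NeuralNetwork.prediction(fitNetwork, xs))
      for (xs, expected) in testSet ]
networkRmse = Statistics.rootMeanSquareError(
    iter(expectedWithPredicted)
)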