PyGAD, a Python 3 library for building genetic algorithms and training machine learning models (Keras & PyTorch).

Overview

PyGAD: Genetic Algorithm in Python

PyGAD is an open-source, easy-to-use Python 3 library for building genetic algorithms and optimizing machine learning algorithms. It supports Keras and PyTorch.

Check the documentation of PyGAD.

PyGAD supports different types of crossover, mutation, and parent selection. PyGAD allows different types of problems to be optimized using the genetic algorithm by customizing the fitness function.
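
As a quick sketch (not part of the original overview), the operator types are selected through parameters of the pygad.GA constructor; the toy fitness function and parameter values below are illustrative:

import pygad
import numpy

# Toy fitness function: maximize the sum of the genes.
def fitness_func(solution, solution_idx):
    return numpy.sum(solution)

# Select the operators through the constructor parameters.
ga_instance = pygad.GA(num_generations=50,
                       num_parents_mating=4,
                       fitness_func=fitness_func,
                       sol_per_pop=8,
                       num_genes=6,
                       parent_selection_type="tournament", # e.g. "sss", "rws", "rank", "random"
                       crossover_type="two_points",        # e.g. "single_point", "uniform", "scattered"
                       mutation_type="random")             # e.g. "swap", "inversion", "scramble", "adaptive"
ga_instance.run()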

The library is under active development and more features are added regularly. If you want a feature to be supported, please check the Contact Us section to send a request.

Donation

You can donate via Open Collective: opencollective.com/pygad.

To donate using PayPal, use either this link: paypal.me/ahmedfgad or the e-mail address [email protected].

Installation

To install PyGAD, simply use pip to download and install the library from PyPI (Python Package Index). The library lives on PyPI at this page: https://pypi.org/project/pygad.

Install PyGAD with the following command:

pip install pygad

PyGAD is developed in Python 3.7.3 and depends on NumPy for creating and manipulating arrays and on Matplotlib for creating figures. The exact NumPy version used while developing PyGAD is 1.16.4; for Matplotlib, it is 3.1.0.
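
To verify the installation, you can import the library and print its version; the __version__ attribute is assumed to be exposed by the installed release (otherwise, pip show pygad reports the version as well):

import pygad

print(pygad.__version__)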

To get started with PyGAD, please read the documentation at Read The Docs https://pygad.readthedocs.io.

PyGAD Source Code

The source code of the PyGAD modules is found in the following GitHub projects:

The documentation of PyGAD is available at Read The Docs https://pygad.readthedocs.io.

PyGAD Documentation

The documentation of the PyGAD library is available at Read The Docs at this link: https://pygad.readthedocs.io. It discusses the modules supported by PyGAD and all of their classes, methods, attributes, and functions. For each module, a number of examples are given.

If there is an issue using PyGAD, feel free to open an issue in this GitHub repository https://github.com/ahmedfgad/GeneticAlgorithmPython or send an e-mail to [email protected].

If you built a project that uses PyGAD, then please drop an e-mail to [email protected] with the following information so that your project is included in the documentation.

  • Project title
  • Brief description
  • Preferably, a link that directs the readers to your project

Please check the Contact Us section for more contact details.

Life Cycle of PyGAD

The next figure lists the different stages in the lifecycle of an instance of the pygad.GA class. Note that PyGAD stops when either all generations are completed or when the function passed to the on_generation parameter returns the string stop.

PyGAD Lifecycle

The next code implements all the callback functions to trace the execution of the genetic algorithm. Each callback function prints its name.

import pygad
import numpy

function_inputs = [4,-2,3.5,5,-11,-4.7]
desired_output = 44

def fitness_func(solution, solution_idx):
    output = numpy.sum(solution*function_inputs)
    fitness = 1.0 / (numpy.abs(output - desired_output) + 0.000001)
    return fitness

fitness_function = fitness_func

def on_start(ga_instance):
    print("on_start()")

def on_fitness(ga_instance, population_fitness):
    print("on_fitness()")

def on_parents(ga_instance, selected_parents):
    print("on_parents()")

def on_crossover(ga_instance, offspring_crossover):
    print("on_crossover()")

def on_mutation(ga_instance, offspring_mutation):
    print("on_mutation()")

def on_generation(ga_instance):
    print("on_generation()")

def on_stop(ga_instance, last_population_fitness):
    print("on_stop()")

ga_instance = pygad.GA(num_generations=3,
                       num_parents_mating=5,
                       fitness_func=fitness_function,
                       sol_per_pop=10,
                       num_genes=len(function_inputs),
                       on_start=on_start,
                       on_fitness=on_fitness,
                       on_parents=on_parents,
                       on_crossover=on_crossover,
                       on_mutation=on_mutation,
                       on_generation=on_generation,
                       on_stop=on_stop)

ga_instance.run()

Because the num_generations argument is set to 3, here is the output.

on_start()

on_fitness()
on_parents()
on_crossover()
on_mutation()
on_generation()

on_fitness()
on_parents()
on_crossover()
on_mutation()
on_generation()

on_fitness()
on_parents()
on_crossover()
on_mutation()
on_generation()

on_stop()
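
As mentioned above, returning the string stop from the function passed to on_generation ends the run early. A minimal sketch reusing fitness_function and function_inputs from the code above (the threshold value is arbitrary):

def on_generation_early_stop(ga_instance):
    # Stop the run once the best fitness crosses an arbitrary threshold.
    if ga_instance.best_solution()[1] >= 1000.0:
        return "stop"

ga_instance = pygad.GA(num_generations=1000,
                       num_parents_mating=5,
                       fitness_func=fitness_function,
                       sol_per_pop=10,
                       num_genes=len(function_inputs),
                       on_generation=on_generation_early_stop)

ga_instance.run()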

Example

Check PyGAD's documentation for information about the implementation of this example.

import pygad
import numpy

"""
Given the following function:
    y = f(w1:w6) = w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6
    where (x1,x2,x3,x4,x5,x6)=(4,-2,3.5,5,-11,-4.7) and y=44
What are the best values for the 6 weights (w1 to w6)? We are going to use the genetic algorithm to optimize this function.
"""

function_inputs = [4,-2,3.5,5,-11,-4.7] # Function inputs.
desired_output = 44 # Function output.

def fitness_func(solution, solution_idx):
    # Calculating the fitness value of each solution in the current population.
    # The fitness function calculates the sum of products between each input and its corresponding weight.
    output = numpy.sum(solution*function_inputs)
    # A small value is added to the denominator to avoid division by zero when the output exactly matches the desired output.
    fitness = 1.0 / (numpy.abs(output - desired_output) + 0.000001)
    return fitness

fitness_function = fitness_func

num_generations = 100 # Number of generations.
num_parents_mating = 7 # Number of solutions to be selected as parents in the mating pool.

# To prepare the initial population, there are 2 ways:
# 1) Prepare it yourself and pass it to the initial_population parameter. This way is useful when the user wants to start the genetic algorithm with a custom initial population.
# 2) Assign valid integer values to the sol_per_pop and num_genes parameters. If the initial_population parameter exists, then the sol_per_pop and num_genes parameters are useless.
sol_per_pop = 50 # Number of solutions in the population.
num_genes = len(function_inputs)

last_fitness = 0
def callback_generation(ga_instance):
    global last_fitness
    print("Generation = {generation}".format(generation=ga_instance.generations_completed))
    print("Fitness    = {fitness}".format(fitness=ga_instance.best_solution()[1]))
    print("Change     = {change}".format(change=ga_instance.best_solution()[1] - last_fitness))
    last_fitness = ga_instance.best_solution()[1]

# Creating an instance of the GA class inside the ga module. Some parameters are initialized within the constructor.
ga_instance = pygad.GA(num_generations=num_generations,
                       num_parents_mating=num_parents_mating, 
                       fitness_func=fitness_function,
                       sol_per_pop=sol_per_pop, 
                       num_genes=num_genes,
                       on_generation=callback_generation)

# Running the GA to optimize the parameters of the function.
ga_instance.run()

# After the generations complete, a plot is shown summarizing how the fitness values evolve over generations.
ga_instance.plot_fitness()

# Returning the details of the best solution.
solution, solution_fitness, solution_idx = ga_instance.best_solution()
print("Parameters of the best solution : {solution}".format(solution=solution))
print("Fitness value of the best solution = {solution_fitness}".format(solution_fitness=solution_fitness))
print("Index of the best solution : {solution_idx}".format(solution_idx=solution_idx))

prediction = numpy.sum(numpy.array(function_inputs)*solution)
print("Predicted output based on the best solution : {prediction}".format(prediction=prediction))

if ga_instance.best_solution_generation != -1:
    print("Best fitness value reached after {best_solution_generation} generations.".format(best_solution_generation=ga_instance.best_solution_generation))

# Saving the GA instance.
filename = 'genetic' # The filename to which the instance is saved. The name is without extension.
ga_instance.save(filename=filename)

# Loading the saved GA instance.
loaded_ga_instance = pygad.load(filename=filename)
loaded_ga_instance.plot_fitness()

For More Information

There are different resources that can be used to get started with the genetic algorithm and building it in Python.

Tutorial: Implementing Genetic Algorithm in Python

To start with coding the genetic algorithm, you can check the tutorial titled Genetic Algorithm Implementation in Python available at these links:

This tutorial was prepared based on a previous version of the project, but it is still a good resource to start with coding the genetic algorithm.

Genetic Algorithm Implementation in Python

Tutorial: Introduction to Genetic Algorithm

Get started with the genetic algorithm by reading the tutorial titled Introduction to Optimization with Genetic Algorithm which is available at these links:

Introduction to Genetic Algorithm

Tutorial: Build Neural Networks in Python

Read about building neural networks in Python through the tutorial titled Artificial Neural Network Implementation using NumPy and Classification of the Fruits360 Image Dataset available at these links:

Building Neural Networks Python

Tutorial: Optimize Neural Networks with Genetic Algorithm

Read about training neural networks using the genetic algorithm through the tutorial titled Artificial Neural Networks Optimization using Genetic Algorithm with Python available at these links:

Training Neural Networks using Genetic Algorithm Python

Tutorial: Building CNN in Python

To start with coding convolutional neural networks, you can check the tutorial titled Building Convolutional Neural Network using NumPy from Scratch available at these links:

This tutorial was prepared based on a previous version of the project, but it is still a good resource to start with coding CNNs.

Building CNN in Python

Tutorial: Derivation of CNN from FCNN

Learn how a convolutional neural network is derived from a fully connected network by reading the tutorial titled Derivation of Convolutional Neural Network from Fully Connected Network Step-By-Step which is available at these links:

Derivation of CNN from FCNN

Book: Practical Computer Vision Applications Using Deep Learning with CNNs

You can also check my book, cited as: Ahmed Fawzy Gad, 'Practical Computer Vision Applications Using Deep Learning with CNNs', Apress, Dec. 2018, ISBN 978-1-4842-4167-7. It discusses neural networks, convolutional neural networks, deep learning, the genetic algorithm, and more.

Find the book at these links:

Citing PyGAD - Bibtex Formatted Citation

If you used PyGAD, please consider adding a citation to the following paper about PyGAD:

@misc{gad2021pygad,
      title={PyGAD: An Intuitive Genetic Algorithm Python Library}, 
      author={Ahmed Fawzy Gad},
      year={2021},
      eprint={2106.06158},
      archivePrefix={arXiv},
      primaryClass={cs.NE}
}

Contact Us

Comments
  • Inconsistency between codes in tutorial and the codes in github

    Hi there,

    Thanks so much for your code; it looks logically clear. I followed your tutorial at https://towardsdatascience.com/genetic-algorithm-implementation-in-python-5ab67bb124a6 and imported your packages from GitHub to run the example exactly; however, I ran into errors.

    After inspecting, it seems the code you provided in the tutorial is inconsistent with your code on GitHub. For example, your function cal_pop_fitness() has two positional arguments, equation_inputs and pop, in your tutorial, but there are no positional arguments in your function under the GitHub module.

    Could you confirm this if possible? It would be highly appreciated if you could help me get the example running.

    Best regards,

    -Yili

    question 
    opened by Yili-Zhang 13
  • Update to pygad.py to use multiprocessing of generations

    Makes use of concurrent.futures.ProcessPoolExecutor to go through the generations faster.

    NOTE: if using multiprocessing, the fitness_func must return the fitness score, as well as the solution_idx passed into it!

    This was tested using the example PyGAD script given in the tutorial, but with a slight modification to the fitness function as described above, as well as the 2 new parameters for the ga_instance. This was the script used for testing:

    import pygad
    import numpy
    
    """
    import pygad
    import numpy
    
    """
    Given the following function:
        y = f(w1:w6) = w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6
        where (x1,x2,x3,x4,x5,x6)=(4,-2,3.5,5,-11,-4.7) and y=44
    What are the best values for the 6 weights (w1 to w6)? We are going to use the genetic algorithm to optimize this function.
    """
    
    function_inputs = [4,-2,3.5,5,-11,-4.7] # Function inputs.
    desired_output = 44 # Function output.
    
    def fitness_func(solution, solution_idx):
        # Calculating the fitness value of each solution in the current population.
        # The fitness function calculates the sum of products between each input and its corresponding weight.
        output = numpy.sum(solution*function_inputs)
        fitness = 1.0 / numpy.abs(output - desired_output)
        return fitness, solution_idx
    
    fitness_function = fitness_func
    
    num_generations = 100 # Number of generations.
    num_parents_mating = 7 # Number of solutions to be selected as parents in the mating pool.
    
    # To prepare the initial population, there are 2 ways:
    # 1) Prepare it yourself and pass it to the initial_population parameter. This way is useful when the user wants to start the genetic algorithm with a custom initial population.
    # 2) Assign valid integer values to the sol_per_pop and num_genes parameters. If the initial_population parameter exists, then the sol_per_pop and num_genes parameters are useless.
    sol_per_pop = 50 # Number of solutions in the population.
    num_genes = len(function_inputs)
    
    last_fitness = 0
    def callback_generation(ga_instance):
        global last_fitness
        print("Generation = {generation}".format(generation=ga_instance.generations_completed))
        print("Fitness    = {fitness}".format(fitness=ga_instance.best_solution()[1]))
        print("Change     = {change}".format(change=ga_instance.best_solution()[1] - last_fitness))
        last_fitness = ga_instance.best_solution()[1]
    
    # Creating an instance of the GA class inside the ga module. Some parameters are initialized within the constructor.
    ga_instance = pygad.GA(num_generations=num_generations,
                           num_parents_mating=num_parents_mating, 
                           fitness_func=fitness_function,
                           sol_per_pop=sol_per_pop, 
                           num_genes=num_genes,
                           on_generation=callback_generation,
                           use_multiprocess=True,
                           max_workers=5)
    
    # Running the GA to optimize the parameters of the function.
    ga_instance.run()
    
    # After the generations complete, a plot is shown summarizing how the fitness values evolve over generations.
    ga_instance.plot_fitness()
    
    # Returning the details of the best solution.
    solution, solution_fitness, solution_idx = ga_instance.best_solution()
    print("Parameters of the best solution : {solution}".format(solution=solution))
    print("Fitness value of the best solution = {solution_fitness}".format(solution_fitness=solution_fitness))
    print("Index of the best solution : {solution_idx}".format(solution_idx=solution_idx))
    
    prediction = numpy.sum(numpy.array(function_inputs)*solution)
    print("Predicted output based on the best solution : {prediction}".format(prediction=prediction))
    
    if ga_instance.best_solution_generation != -1:
        print("Best fitness value reached after {best_solution_generation} generations.".format(best_solution_generation=ga_instance.best_solution_generation))
    
    # Saving the GA instance.
    filename = 'genetic' # The filename to which the instance is saved. The name is without extension.
    ga_instance.save(filename=filename)
    
    # Loading the saved GA instance.
    loaded_ga_instance = pygad.load(filename=filename)
    loaded_ga_instance.plot_fitness()
    

    Makes my generations go by much faster :D

    Now, because I'm using -numpy.inf's in the pygad script, this might not report proper fitness numbers generation to generation. I'm simply using -numpy.inf as a really small fitness number so that as each member returns a fitness score it'll increase; however, it might still report as -inf, which isn't desirable. Maybe a different number can be used/assumed? Or some other logic, either way :)

    enhancement 
    opened by windowshopr 9
  • Advice for GA implementation in my project

    Hey!

    I am working on some code with OpenCV to track the percentage of frames in which my face is detected, in terms of "attention span". I intend to optimize this attention span with an evolutionary solver that iterates over combinations of diffused and direct light levels in my environment via an Arduino board. Owing to the long stretch of time over which a generation of light levels needs to be iterated, I am unsure of the best evolutionary algorithm library or approach I can use to do this. I am currently not connected to an Arduino and am just trying to do this with dummy numbers (1-4, 1-4) for diffused and direct light respectively. The idea is that every 15 minutes or so (let's say 50,000 frames) the light levels assume a new and improved combination and wait for another 15.

    I am new to Python and coding in general, so I am finding it hard to figure out which library to use and its ideal implementation for this purpose. I'd be grateful for any advice.

    question 
    opened by NihitBorpujari 8
  • [FEATURE] Add Multiprocess Capabilities! :)

    I know the documentation, or an article I read (can't remember which), said that PyGAD didn't perform well enough with multiprocessing to warrant adding it as a feature; however, I have a GREAT need for it with a lot of my fitness functions that I create using PyGAD. Would be awesome to see it get implemented as another feature before running a GA search.

    I envision something like adding a parameter use_multiprocessing = True, and num_workers = multiprocessing.cpu_count(), and if those are enabled, start a process pool for each chromosome in the current population, so each population item gets its own worker. When the generation is done, the pool is closed, and then when the next generation starts, the pool fires up again for the new population. Pseudo-code would look something like:

    import concurrent.futures
    
    if use_multiprocessing == True:
        with concurrent.futures.ProcessPoolExecutor(max_workers=num_workers) as executor:
            results = [executor.submit(fitness_func, solution, solution_idx) for solution_idx, solution in enumerate(current_population)]
            for f in concurrent.futures.as_completed(results):
                ind_solution_result = f.result() #[0]
                # Logic for what to do with the individual solution stuff here
            executor.shutdown(wait=True)
    else:
        #...the rest of the default PyGAD behaviour
    

    ...I recognize this COULD be a big undertaking, but doing it this way would allow the current population of chromosomes/generation to be gone through much quicker than having to wait for a linear progression when more cpu cores are available.

    You COULD also create several ga_instances to run simultaneously, yes, but I think being able to get through the generations themselves quicker is a better idea.

    Would love to see this get implemented as I love PyGAD and don't really want to switch to DEAP as PyGAD is much easier to control/use IMO.

    opened by windowshopr 7
  • ga_instance.continue() after pygad.load()

    Suggestion:

    I created a GA with pygad where the fitness function needs a lot of time to calculate (several minutes per fitness calculation). When I stop it manually by hitting Ctrl+C, I let the code continue and save the current state with ga_instance.save("file"). When I start another run, I load the last state with pygad.load("file") and call ga_instance.run() to continue with the current population. It would be neat to be able to continue the calculations without losing ga_instance.solutions etc., so that when I call plot_fitness() my old data is still there. Currently, with ga_instance.run(), everything gets reset to [].

    Sample Code:

    try:
      ga_instance = pygad.load("file")
      ga_instance.continue()
    except FileNotFoundError:
      ga_instance = pygad.GA([...])
      ga_instance.run()
    
    ga_instance.save("file")
    ga_instance.plot_fitness()
    
    enhancement 
    opened by FeBe95 6
  • For some reason fitness never exceeds 1.0

    I use pygad to train my neural network. The code below is a test of pygad, and it worked. Afterwards, I wrote a simple NN implementation and tried to train it with pygad, but for some reason the fitness never exceeds 1.0. First I thought my code wasn't working properly, but when I ran my first pygad test (the code below) again, it had the same issue.

    import math
    import pygad
    
    
    def calculate_neuron(input, weight, nonlinear=None, bias=False):
        """
        Calculate value of neuron.
    
        :param input: Input for neuron
        :param weight: Weight for each input
        :param nonlinear: Nonlinear function for neuron. If == None then neuron is linear
        :param bias: If true bias exist in previous layer
        :return: value of neuron
        """
    
        value = 0
        for i in range(len(input)):
            value += input[i] * weight[i]
    
        if bias:
            value += 1 * weight[len(weight) - 1]
    
        if nonlinear is not None:
            value = nonlinear(value)
    
        return value
    
    
    def sigmoid(x):
        return math.exp(x) / (math.exp(x) + 1)
    
    
    def xor_neural_network(input, weight):
        """
        This is a neural network that must implement the xor function. (I didn't read about objects yet)
    
        :param input: Input for the neural network. For this network its length is 2
        :param weight: Weights for the network. Length is 9
        :return:
        """
    
        hid1 = calculate_neuron(input, weight[:3], sigmoid, True)
        hid2 = calculate_neuron(input, weight[3:6], sigmoid, True)
    
        output = calculate_neuron([hid1, hid2], weight[6:9], sigmoid, bias=True)
        return output
    
    
    function_inputs = [[0, 0],
                       [0, 1],
                       [1, 0],
                       [1, 1]]
    
    des_outputs = [0, 1, 1, 0]
    
    
    def fitness_func(solution):
        outputs = []
        for input in function_inputs:
            outputs.append(xor_neural_network(input, solution))
    
        error = 0
        for output, des_output in zip(outputs, des_outputs):
            error += abs(output - des_output)
    
        fitness = 1 / error
        return fitness
    
    
    if __name__ == "__main__":
        num_generations = 1000
        sol_per_pop = 800
        num_parents_mating = 4
    
        mutation_percent_genes = 10
    
        parent_selection_type = "sss"
    
        crossover_type = "single_point"
    
        mutation_type = "random"
    
        keep_parents = 1
    
        num_genes = 9
    
        ga_instance = pygad.GA(num_generations=num_generations,
                               sol_per_pop=sol_per_pop,
                               num_parents_mating=num_parents_mating,
                               num_genes=num_genes,
                               fitness_func=fitness_func,
                               mutation_percent_genes=mutation_percent_genes,
                               parent_selection_type=parent_selection_type,
                               crossover_type=crossover_type,
                               mutation_type=mutation_type,
                               keep_parents=keep_parents,
                               )
    
        while True:
            ga_instance.run()
            print(ga_instance.best_solution())
            print(xor_neural_network(function_inputs[0], ga_instance.best_solution()[0]))
            print(xor_neural_network(function_inputs[1], ga_instance.best_solution()[0]))
            print(xor_neural_network(function_inputs[2], ga_instance.best_solution()[0]))
            print(xor_neural_network(function_inputs[3], ga_instance.best_solution()[0]))
            ga_instance.plot_result()
    

    question 
    opened by CheshireCat26 6
  • 'mutation_type = None' not allowed

    I did not use mutation, and pygad printed the following warning:

    If you do not want to mutate any gene, please set mutation_type=None.
    

    But when I set mutation_type=None, or mutation_type="None", then pygad crashed:

      File "C:\My\MyPythonProject\GeneticAlgo\venv\lib\site-packages\pygad\pygad.py", line 282, in __init__
        raise TypeError("The expected type of the 'mutation_type' parameter is str but ({mutation_type}) found.".format(mutation_type=type(mutation_type)))
    TypeError: The expected type of the 'mutation_type' parameter is str but (<class 'NoneType'>) found.
    

    I looked at the source code; it seems that line 280 does not allow the possibility of mutation_type being None, yet line 295 does allow such a possibility.

    bug 
    opened by mathlusiverse 5
  • Equation inputs

    Hi Ahmed,

    I'm using your code to solve a problem in my project. However, when I change the equation_inputs from [4,-2,3.5,5,-11,-4.7] to my data file, which is a .dat with 2000 rows and 1 column, I get this error: "ValueError: operands could not be broadcast together with shapes (5,3) (3,2000)"

    I'm using only three files; that's the reason for the 3 in (3,2000).

    I've partially solved my problem by using max(datafile), but I don't find the best solution in all cases, because I'm improving only one point instead of 2000. You can see the fitness evolution in this case, for example, in the attached figure. The fitness function is the Chi-square of one point.

    Could you please give some insight into this particular problem?

    question 
    opened by willstarplan 5
  • Project doesn't state a license

    The README.md describes this project as 'open-source' but I can't find an actual license anywhere. Please choose and specify the license under which this code is released as open-source, so that potential users and contributors know what they are permitted to do with it and under what conditions.

    opened by kittentronic 4
  • Keep Parents Issue

    Hello, first of all I would like to say you did an excellent job with the pygad project.

    Secondly, I would like to address an issue I am having with the keep_parents parameter. I was running an algorithm yesterday and it was working fine, but some changes were apparently made to the code, and now every time I assign a value to that parameter an error pops up. It is because some part of the code related to it has a variable as a tuple, but the attribute .shape is called on it, which is only supported for numpy arrays.

    The error is the following: AttributeError: 'tuple' object has no attribute 'shape'

    And it happens on line 1202 of the pygad.py code:

    1200 elif (self.keep_parents > 0):
    1201     parents_to_keep = self.steady_state_selection(self.last_generation_fitness, num_parents=self.keep_parents)
 -> 1202     self.population[0:parents_to_keep.shape[0], :] = parents_to_keep
    1203     self.population[parents_to_keep.shape[0]:, :] = self.last_generation_offspring_mutation

    Is there a different way to implement this parameter now? Or, if there is not, is there a possibility that you could fix the code?

    Thanks for your time and work on the pygad project.

    bug 
    opened by edgardohb 4
  • Rank versus Steady State Parent Selection

    It seems that the two parent selection techniques are exactly the same. Rank parent selection is, however, meant to be a more explorative parent selection approach where every chromosome/solution is assigned a selection probability with respect to its rank (which is based on its fitness). This is meant to decouple the selection probability from the population fitness distribution in order to avoid selection exploitation by very strong solutions.

    Rank selection is essentially the same as roulette wheel selection, but instead of weighting each solution's selection probability by its fitness, the weighting should be done by the rank of the solution compared to the other solutions.

    question 
    opened by DSKritzinger 4
  • PyGAD on GPU

    Salam Ahmed. Can I implement the PyGAD framework on an Nvidia GPU instead of the CPU using RAPIDS, numba and CUDA? If not, do you know of a genetic algorithm library in Python that can run on a GPU? Thank you.

    opened by IMG-5 0
  • Bug module 'numpy' has no attribute 'int'. Did you mean: 'inf'?

    Traceback (most recent call last):
      File "C:\Users\HP\Desktop\Programs\Python\Testing\test.py", line 1, in <module>
        import pygad
      File "C:\Users\HP\.virtualenvs\Testing-x9UQs_eI\lib\site-packages\pygad\__init__.py", line 1, in <module>
        from .pygad import * # Relative import.
      File "C:\Users\HP\.virtualenvs\Testing-x9UQs_eI\lib\site-packages\pygad\pygad.py", line 9, in <module>
        class GA:
      File "C:\Users\HP\.virtualenvs\Testing-x9UQs_eI\lib\site-packages\pygad\pygad.py", line 11, in GA
        supported_int_types = [int, numpy.int, numpy.int8, numpy.int16, numpy.int32, numpy.int64, numpy.uint, numpy.uint8, numpy.uint16, numpy.uint32, numpy.uint64]
      File "C:\Users\HP\.virtualenvs\Testing-x9UQs_eI\lib\site-packages\numpy\__init__.py", line 284, in __getattr__
        raise AttributeError("module {!r} has no attribute "
    AttributeError: module 'numpy' has no attribute 'int'. Did you mean: 'inf'?

    opened by JavaProgswing 5
  • Problems with multithreading and generation step

    Hi,

    I really appreciate your work on PyGAD!

    I'm using it to do some chaotic learning with thousands of models and a greedy fitness function; the parallelization is really efficient in my case.

    I have found some problems with multithreading using Keras models.

    To reproduce the problem, I use this regression sample: https://pygad.readthedocs.io/en/latest/README_pygad_kerasga_ReadTheDocs.html#example-1-regression-example

    I only reduce the num_generations to 100.

    Steps to reproduce (screenshots omitted):

    • I run the sample a few times,

    • then I enable parallel processing on 8 threads,

    • then run again a few times.

    • Sometimes I see in the logs a fitness lower than in the generation n-1.

    • I printed all the solutions used in each epoch, and I saw that the solutions are the same most of the time, so parallel_processing seems to break the generation of the next population in most cases.

    Thanks!

    EDIT: In addition, I tried to reproduce the same problem with this classification problem sample; adding multiprocessing support causes the same problem.

    opened by BenoitMiquey 0
  • keras.ga - How can I disable printing for every step?

    I am using the kerasga module and it keeps printing for every step like so:

    1/1 [==============================] - 0s 7ms/step
    1/1 [==============================] - 0s 8ms/step
    1/1 [==============================] - 0s 7ms/step
    

    All the Google answers that I can find tell me to put verbose=0 when I call Keras' model.predict. However, I am not calling model.predict; I am calling pygad.kerasga.predict(model=model, solution=solution, data=state), and this function doesn't support a verbose flag. What can I do to stop it from printing on every step? My pygad version is 2.18.1 and my Python version is 3.10.6.

    opened by vader-coder 1
  • Bug in indexing. Code fails with error.

    https://github.com/ahmedfgad/GeneticAlgorithmPython/blob/251072766d8a9f3ea03ab41f74cda2e8c20c21d0/example_custom_operators.py#L58

    Easy fix. It should be random_gene_idx = numpy.random.choice(range(offspring.shape[1]))

    opened by why-not 0
Releases (2.18.1)
  • 2.18.1 (Sep 19, 2022)

  • 2.18.0 (Sep 9, 2022)

    1. Raise an exception if the sum of fitness values is zero while either roulette wheel or stochastic universal parent selection is used. https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/129
    2. Initialize the value of the run_completed property to False. https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/122
    3. The values of these properties are no longer reset with each call to the run() method: self.best_solutions, self.best_solutions_fitness, self.solutions, self.solutions_fitness. https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/123. Now, the user has the flexibility of calling the run() method more than once while extending the data collected after each generation. Another advantage happens when the instance is loaded and the run() method is called, as the old fitness values are shown on the graph alongside the new fitness values. Read more in this section: [Continue without Loosing Progress](https://pygad.readthedocs.io/en/latest/README_pygad_ReadTheDocs.html#continue-without-loosing-progress)
    4. Thanks [Prof. Fernando Jiménez Barrionuevo](http://webs.um.es/fernan) (Dept. of Information and Communications Engineering, University of Murcia, Murcia, Spain) for editing this [comment](https://github.com/ahmedfgad/GeneticAlgorithmPython/blob/5315bbec02777df96ce1ec665c94dece81c440f4/pygad.py#L73) in the code. https://github.com/ahmedfgad/GeneticAlgorithmPython/commit/5315bbec02777df96ce1ec665c94dece81c440f4
    5. A bug fixed when crossover_type=None.
    6. Support of elitism selection through a new parameter named keep_elitism. It defaults to 1, which means that for each generation only the best solution is kept in the next generation. If assigned 0, then it has no effect. Read more in this section: [Elitism Selection](https://pygad.readthedocs.io/en/latest/README_pygad_ReadTheDocs.html#elitism-selection). https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/74 (A short sketch using keep_elitism and random_seed follows this list.)
    7. A new instance attribute named last_generation_elitism added to hold the elitism in the last generation.
    8. A new parameter called random_seed added to accept a seed for the random function generators. Credit to this issue https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/70 and [Prof. Fernando Jiménez Barrionuevo](http://webs.um.es/fernan). Read more in this section: [Random Seed](https://pygad.readthedocs.io/en/latest/README_pygad_ReadTheDocs.html#random-seed).
    9. Editing the pygad.TorchGA module to make sure the tensor data is moved from GPU to CPU. Thanks to Rasmus Johansson for opening this pull request: https://github.com/ahmedfgad/TorchGA/pull/2
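
    A minimal sketch (not part of the release notes) combining the new keep_elitism and random_seed parameters; the fitness function is illustrative:

    import pygad
    import numpy

    def fitness_func(solution, solution_idx):
        return numpy.sum(solution)

    ga_instance = pygad.GA(num_generations=50,
                           num_parents_mating=4,
                           fitness_func=fitness_func,
                           sol_per_pop=8,
                           num_genes=6,
                           keep_elitism=2,  # keep the best 2 solutions in the next generation
                           random_seed=2)   # seed the random generators for reproducible runs
    ga_instance.run()
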
  • 2.17.0 (Jul 8, 2022)

    PyGAD 2.17.0

    Release Date: 8 July 2022

    1. Solved an issue when the gene_space parameter is given a static value, e.g. gene_space=[range(5), 4]; the second gene's value is static (4), which previously caused an exception.
    2. Fixed the issue where the allow_duplicate_genes parameter did not work when mutation is disabled (i.e. mutation_type=None). This is by checking for duplicates after crossover directly. https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/39
    3. Solve an issue in the tournament_selection() method as the indices of the selected parents were incorrect. https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/89
    4. Reuse the fitness values of the previously explored solutions rather than recalculating them. This feature only works if save_solutions=True.
    5. Parallel processing is supported. This is by the introduction of a new parameter named parallel_processing in the constructor of the pygad.GA class. Thanks to [@windowshopr](https://github.com/windowshopr) for opening the issue [#78](https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/78) at GitHub. Check the [Parallel Processing in PyGAD](https://pygad.readthedocs.io/en/latest/README_pygad_ReadTheDocs.html#parallel-processing-in-pygad) section for more information and examples.
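
    A minimal sketch (not part of the release notes) of the new parallel_processing parameter, assuming the ["thread"/"process", workers] form described in the linked section; the fitness function is illustrative:

    import pygad
    import numpy

    def fitness_func(solution, solution_idx):
        return numpy.sum(solution)

    # Compute the fitness of each generation's solutions using 4 threads.
    ga_instance = pygad.GA(num_generations=50,
                           num_parents_mating=4,
                           fitness_func=fitness_func,
                           sol_per_pop=8,
                           num_genes=6,
                           parallel_processing=["thread", 4])
    ga_instance.run()
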
  • 2.16.3 (Feb 3, 2022)

    Changes in PyGAD 2.16.3

    1. A new instance attribute called previous_generation_fitness added in the pygad.GA class. It holds the fitness values of one generation before the fitness values saved in the last_generation_fitness.
    2. Issue in the cal_pop_fitness() method in getting the correct indices of the previous parents. This is solved by using the previous generation's fitness saved in the new attribute previous_generation_fitness to return the parents' fitness values. Thanks to Tobias Tischhauser (M.Sc. - [Mitarbeiter Institut EMS, Departement Technik, OST – Ostschweizer Fachhochschule, Switzerland](https://www.ost.ch/de/forschung-und-dienstleistungen/technik/systemtechnik/ems/team)) for detecting this bug.
    3. Validate the fitness value returned from the fitness function. An exception is raised if something is wrong. https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/67
  • 2.16.1 (Sep 29, 2021)

    1. Reuse the fitness of previously explored solutions rather than recalculating them. This feature only works if save_solutions=True.
    2. The user can use the tqdm library to show a progress bar. https://github.com/ahmedfgad/GeneticAlgorithmPython/discussions/50
    import pygad
    import numpy
    import tqdm
    
    equation_inputs = [4,-2,3.5]
    desired_output = 44
    
    def fitness_func(solution, solution_idx):
        output = numpy.sum(solution * equation_inputs)
        fitness = 1.0 / (numpy.abs(output - desired_output) + 0.000001)
        return fitness
    
    num_generations = 10000
    with tqdm.tqdm(total=num_generations) as pbar:
        ga_instance = pygad.GA(num_generations=num_generations,
                               sol_per_pop=5,
                               num_parents_mating=2,
                               num_genes=len(equation_inputs),
                               fitness_func=fitness_func,
                               on_generation=lambda _: pbar.update(1))
        
        ga_instance.run()
    
    ga_instance.plot_result()
    
    3. Solved the issue of unequal length between the solutions and solutions_fitness when the save_solutions parameter is set to True. Now, the fitness of the last population is appended to the solutions_fitness array. https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/64
    4. There was an issue of getting the length of these 4 variables (solutions, solutions_fitness, best_solutions, and best_solutions_fitness) doubled after each call of the run() method. This is solved by resetting these variables at the beginning of the run() method. https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/62
    5. Bug fixes when adaptive mutation is used (mutation_type="adaptive"). https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/65
  • 2.16.0 (Jun 20, 2021)

    A user-defined function can be passed to the mutation_type, crossover_type, and parent_selection_type parameters in the pygad.GA class to create custom mutation, crossover, and parent selection operators. Check the User-Defined Crossover, Mutation, and Parent Selection Operators section in the documentation: https://pygad.readthedocs.io/en/latest/README_pygad_ReadTheDocs.html#user-defined-crossover-mutation-and-parent-selection-operators The example_custom_operators.py script gives an example of building and using custom functions for the 3 operators. https://github.com/ahmedfgad/GeneticAlgorithmPython/discussions/50
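
    A sketch (not part of the release notes) of a user-defined mutation operator, assuming the callable receives the offspring array and the pygad.GA instance and returns the mutated offspring, as described in the linked section; the fitness function is illustrative:

    import pygad
    import numpy

    def fitness_func(solution, solution_idx):
        return numpy.sum(solution)

    def mutation_func(offspring, ga_instance):
        # Add small random noise to one randomly chosen gene of each offspring.
        for chromosome_idx in range(offspring.shape[0]):
            random_gene_idx = numpy.random.choice(range(offspring.shape[1]))
            offspring[chromosome_idx, random_gene_idx] += numpy.random.random()
        return offspring

    ga_instance = pygad.GA(num_generations=50,
                           num_parents_mating=4,
                           fitness_func=fitness_func,
                           sol_per_pop=8,
                           num_genes=6,
                           mutation_type=mutation_func)
    ga_instance.run()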

  • 2.15.1 (Jun 18, 2021)

  • 2.15.0 (Jun 18, 2021)

    1. Control the precision of all genes/individual genes. Thanks to Rainer for asking about this feature: https://github.com/ahmedfgad/GeneticAlgorithmPython/discussions/43#discussioncomment-763452
    2. A new attribute named last_generation_parents_indices holds the indices of the selected parents in the last generation.
    3. In adaptive mutation, there is no need to recalculate the fitness values of the parents selected in the last generation, as these values can be returned based on the last_generation_fitness and last_generation_parents_indices attributes. This speeds up adaptive mutation.
    4. When a sublist has a value of None in the gene_space parameter (e.g. gene_space=[[1, 2, 3], [5, 6, None]]), then its value will be randomly generated for each solution rather than being generated once for all solutions. Previously, a value of None in a sublist of the gene_space parameter was identical across all solutions.
    5. The dictionary assigned to the gene_space parameter itself or one of its elements has a new key called "step" to specify the step of moving from the start to the end of the range specified by the 2 existing keys "low" and "high". An example is {"low": 0, "high": 30, "step": 2} to have only even values for the gene(s) starting from 0 to 30. For more information, check the More about the gene_space Parameter section. https://github.com/ahmedfgad/GeneticAlgorithmPython/discussions/48
    6. A new function called predict() is added in both the pygad.kerasga and pygad.torchga modules to make predictions. This makes it easier than using custom code each time a prediction is to be made.
    7. A new parameter called stop_criteria allows the user to specify one or more stop criteria to stop the evolution based on some conditions. Each criterion is passed as a str which has a stop word. The 2 currently supported words are reach and saturate. reach stops the run() method if the fitness value is equal to or greater than a given fitness value. An example for reach is "reach_40", which stops the evolution if the fitness is >= 40. saturate means stop the evolution if the fitness saturates for a given number of consecutive generations. An example for saturate is "saturate_7", which means stop the run() method if the fitness does not change for 7 consecutive generations. Thanks to Rainer for asking about this feature: https://github.com/ahmedfgad/GeneticAlgorithmPython/discussions/44 (A short sketch follows this list.)
    8. A new bool parameter, defaulting to False, named save_solutions is added to the constructor of the pygad.GA class. If True, then all solutions in each generation are appended into an attribute called solutions, which is a NumPy array.
    9. The plot_result() method is renamed to plot_fitness(). The users should migrate to the new name as the old name will be removed in the future.
    10. Four new optional parameters are added to the plot_fitness() function in the pygad.GA class which are font_size=14, save_dir=None, color="#3870FF", and plot_type="plot". Use font_size to change the font of the plot title and labels. save_dir accepts the directory to which the figure is saved. It defaults to None which means do not save the figure. color changes the color of the plot. plot_type changes the plot type which can be either "plot" (default), "scatter", or "bar". https://github.com/ahmedfgad/GeneticAlgorithmPython/pull/47
    11. The default value of the title parameter in the plot_fitness() method is "PyGAD - Generation vs. Fitness" rather than "PyGAD - Iteration vs. Fitness".
    12. A new method named plot_new_solution_rate() creates, shows, and returns a figure showing the rate of new/unique solutions explored in each generation. It accepts the same parameters as in the plot_fitness() method. This method only works when save_solutions=True in the pygad.GA class's constructor.
    13. A new method named plot_genes() creates, shows, and returns a figure to show how each gene changes per generation. It accepts parameters similar to those of the plot_fitness() method, in addition to the graph_type, fill_color, and solutions parameters. The graph_type parameter can be either "plot" (default), "boxplot", or "histogram". fill_color accepts the fill color, which works when graph_type is either "boxplot" or "histogram". solutions can be either "all" or "best" to decide whether all solutions or only the best solutions are used.
    14. The gene_type parameter now supports controlling the precision of float data types. For a gene, rather than assigning just the data type like float, assign a list/tuple/numpy.ndarray with 2 elements where the first one is the type and the second one is the precision. For example, [float, 2] forces a gene with a value like 0.1234 to be 0.12. For more information, check the More about the gene_type Parameter section.
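
    A minimal sketch (not part of the release notes) of the stop_criteria parameter referenced in item 7; the fitness function is illustrative:

    import pygad
    import numpy

    def fitness_func(solution, solution_idx):
        return numpy.sum(solution)

    # Stop if the best fitness reaches 40, or if it does not change for 7 consecutive generations.
    ga_instance = pygad.GA(num_generations=1000,
                           num_parents_mating=4,
                           fitness_func=fitness_func,
                           sol_per_pop=8,
                           num_genes=6,
                           stop_criteria=["reach_40", "saturate_7"])
    ga_instance.run()
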
  • 2.14.3 (Jun 6, 2021)

  • 2.14.2 (May 28, 2021)

  • 2.14.1 (May 19, 2021)

    1. Issue #40 is solved. Now, the None value works with the crossover_type and mutation_type parameters: https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/40
    2. The gene_type parameter supports accepting a list/tuple/numpy.ndarray of numeric data types for the genes. This helps to control the data type of each individual gene. Previously, gene_type could only be assigned a single data type that is applied to all genes. (A short sketch follows this list.)
    3. A new bool attribute named gene_type_single is added to the pygad.GA class. It is True when there is a single data type assigned to the gene_type parameter. When the gene_type parameter is assigned a list/tuple/numpy.ndarray, then gene_type_single is set to False.
    4. The mutation_by_replacement flag now has no effect if gene_space exists except for the genes with None values. For example, for gene_space=[None, [5, 6]] the mutation_by_replacement flag affects only the first gene which has None for its value space.
    5. When an element has a value of None in the gene_space parameter (e.g. gene_space=[None, [5, 6]]), then its value will be randomly generated for each solution rather than being generated once for all solutions. Previously, a gene with a None value in gene_space was the same across all solutions.
    6. Some changes in the documentation according to issue #32: https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/32
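
    A minimal sketch (not part of the release notes) of the per-gene gene_type referenced in item 2, using 3 genes with different data types; the fitness function is illustrative:

    import pygad
    import numpy

    def fitness_func(solution, solution_idx):
        return numpy.sum(solution)

    ga_instance = pygad.GA(num_generations=10,
                           num_parents_mating=2,
                           fitness_func=fitness_func,
                           sol_per_pop=5,
                           num_genes=3,
                           gene_type=[int, float, numpy.int32])  # one data type per gene
    ga_instance.run()
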
  • 2.13.0 (Mar 13, 2021)

    PyGAD 2.13.0

    Release Date: 12 March 2021

    1. A new bool parameter called allow_duplicate_genes is supported. If True, which is the default, then a solution/chromosome may have duplicate gene values. If False, then each gene will have a unique value in its solution. Check the Prevent Duplicates in Gene Values section for more details.
    2. The last_generation_fitness is updated at the end of each generation not at the beginning. This keeps the fitness values of the most up-to-date population assigned to the last_generation_fitness parameter.
  • 2.12.0 (Feb 20, 2021)

    Release Date: 20 February 2021

    1. 4 new instance attributes are added to hold temporary results after each generation: last_generation_fitness holds the fitness values of the solutions in the last generation, last_generation_parents holds the parents selected from the last generation, last_generation_offspring_crossover holds the offspring generated after applying the crossover in the last generation, and last_generation_offspring_mutation holds the offspring generated after applying the mutation in the last generation. You can access these attributes inside the on_generation() method for example.
    2. A bug fixed when the initial_population parameter is used. The bug occurred due to a mismatch between the data type of the array assigned to initial_population and the gene type in the gene_type attribute. Assuming that the array assigned to the initial_population parameter is ((1, 1), (3, 3), (5, 5), (7, 7)) which has type int. When gene_type is set to float, then the genes will not be float but casted to int because the defined array has int type. The bug is fixed by forcing the array assigned to initial_population to have the data type in the gene_type attribute. Check the issue at GitHub: https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/27

    Thanks to Marios Giouvanakis, a PhD candidate in Electrical & Computer Engineering, Aristotle University of Thessaloniki (Αριστοτέλειο Πανεπιστήμιο Θεσσαλονίκης), Greece, for emailing me about these issues.

  • 2.11.0 (Feb 16, 2021)

    PyGAD 2.11.0

    Release Date: 16 February 2021

    1. In the gene_space argument, the user can use a dictionary to specify the lower and upper limits of a gene. This dictionary must have only 2 items with keys low and high to specify the low and high limits of the gene, respectively. This way, PyGAD takes care of not exceeding the value limits of the gene. For a problem with only 2 genes, using gene_space=[{'low': 1, 'high': 5}, {'low': 0.2, 'high': 0.81}] means the accepted values for the first gene start from 1 (inclusive) to 5 (exclusive), while the second one has values between 0.2 (inclusive) and 0.81 (exclusive). For more information, please check the Limit the Gene Value Range section of the documentation. (A short sketch follows this list.)
    2. The plot_result() method returns the figure so that the user can save it.
    3. Bug fixes in copying elements from the gene space.
    4. For a gene with a set of discrete values (more than 1 value) in the gene_space parameter like [0, 1], it was possible that the gene value may not change after mutation. That is if the current value is 0, then the randomly selected value could also be 0. Now, it is verified that the new value is changed. So, if the current value is 0, then the new value after mutation will not be 0 but 1.
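
    A minimal sketch (not part of the release notes) of the dictionary form of gene_space referenced in item 1, for a 2-gene problem; the fitness function is illustrative:

    import pygad
    import numpy

    def fitness_func(solution, solution_idx):
        return numpy.sum(solution)

    # First gene sampled from [1, 5), second gene from [0.2, 0.81).
    ga_instance = pygad.GA(num_generations=10,
                           num_parents_mating=2,
                           fitness_func=fitness_func,
                           sol_per_pop=5,
                           num_genes=2,
                           gene_space=[{'low': 1, 'high': 5}, {'low': 0.2, 'high': 0.81}])
    ga_instance.run()
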
  • 2.10.2 (Jan 15, 2021)

  • 2.10.1 (Jan 11, 2021)

    Changes in PyGAD 2.10.1

    1. In the gene_space parameter, any None value (regardless of its index or axis) is replaced by a randomly generated number based on the 3 parameters init_range_low, init_range_high, and gene_type. So, the None values in [..., None, ...] or [..., [..., None, ...], ...] are replaced with random values. This gives more freedom in building the space of values for the genes.
    2. All the numbers passed to the gene_space parameter are cast to the type specified in the gene_type parameter.
    3. The numpy.uint data type is supported for the parameters that accept integer values.
    4. In the pygad.kerasga module, the model_weights_as_vector() function uses the trainable attribute of the model's layers to only return the trainable weights in the network. So, only the trainable layers with their trainable attribute set to True (trainable=True), which is the default value, have their weights evolved. All non-trainable layers with the trainable attribute set to False (trainable=False) will not be evolved. Thanks to Prof. Tamer A. Farrag for pointing that out on GitHub.
  • 2.10.0 (Jan 4, 2021)

    1. Support of a new module pygad.torchga to train PyTorch models using PyGAD. Check its documentation.
    2. Support of adaptive mutation where the mutation rate is determined by the fitness value of each solution. Read the Adaptive Mutation section for more details. Also, read this paper: Libelli, S. Marsili, and P. Alba. "Adaptive mutation in genetic algorithms." Soft Computing 4.2 (2000): 76-80. (A short sketch follows this list.)
    3. Before the run() method completes or exits, the fitness value of the best solution in the current population is appended to the best_solutions_fitness list attribute. Note that the fitness value of the best solution in the initial population is already saved at the beginning of the list. So, the fitness value of the best solution is saved before the genetic algorithm starts and after it ends.
    4. When the parameter parent_selection_type is set to sss (steady-state selection), then a warning message is printed if the value of the keep_parents parameter is set to 0.
    5. More validations to the user input parameters.
    6. The default value of the mutation_percent_genes is set to the string "default" rather than the integer 10. This change helps to know whether the user explicitly passed a value to the mutation_percent_genes parameter or it is left to its default one. The "default" value is later translated into the integer 10.
    7. The mutation_percent_genes parameter is no longer accepting the value 0. It must be >0 and <=100.
    8. The built-in warnings module is used to show warning messages rather than just using the print() function.
    9. A new bool parameter called suppress_warnings is added to the constructor of the pygad.GA class. It allows the user to control whether the warning messages are printed or not. It defaults to False which means the messages are printed.
    10. A helper method called adaptive_mutation_population_fitness() is created to calculate the average fitness value used in adaptive mutation to filter the solutions.
    11. The best_solution() method accepts a new optional parameter called pop_fitness. It accepts a list of the fitness values of the solutions in the population. If None, then the cal_pop_fitness() method is called to calculate the fitness values of the population.
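
    A minimal sketch (not part of the release notes) of adaptive mutation referenced in item 2. With mutation_type="adaptive", the mutation_percent_genes parameter takes 2 values, the first for low-quality solutions and the second for high-quality ones; the fitness function is illustrative:

    import pygad
    import numpy

    def fitness_func(solution, solution_idx):
        return numpy.sum(solution)

    ga_instance = pygad.GA(num_generations=50,
                           num_parents_mating=4,
                           fitness_func=fitness_func,
                           sol_per_pop=8,
                           num_genes=6,
                           mutation_type="adaptive",
                           mutation_percent_genes=[25, 10])  # [low-quality rate, high-quality rate]
    ga_instance.run()
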
  • 2.9.0 (Dec 5, 2020)

    Changes in PyGAD 2.9.0 (06 December 2020):

    1. The fitness values of the initial population are considered in the best_solutions_fitness attribute.
    2. An optional parameter named save_best_solutions is added. It defaults to False. When it is True, the best solution after each generation is saved into an attribute named best_solutions. If False, then no solutions are saved and the best_solutions attribute will be empty. (A short sketch follows this list.)
    3. Scattered crossover is supported. To use it, assign the crossover_type parameter the value "scattered".
    4. NumPy arrays are now supported by the gene_space parameter.
    5. The following parameters (gene_type, crossover_probability, mutation_probability, delay_after_gen) can be assigned to a numeric value of any of these data types: int, float, numpy.int, numpy.int8, numpy.int16, numpy.int32, numpy.int64, numpy.float, numpy.float16, numpy.float32, or numpy.float64.
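
    A minimal sketch (not part of the release notes) combining the save_best_solutions parameter and scattered crossover from items 2 and 3; the fitness function is illustrative:

    import pygad
    import numpy

    def fitness_func(solution, solution_idx):
        return numpy.sum(solution)

    ga_instance = pygad.GA(num_generations=10,
                           num_parents_mating=2,
                           fitness_func=fitness_func,
                           sol_per_pop=5,
                           num_genes=4,
                           crossover_type="scattered",
                           save_best_solutions=True)
    ga_instance.run()
    print(ga_instance.best_solutions)  # the best solution of each generation
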
  • 2.8.0 (Sep 20, 2020)

  • 2.7.2 (Sep 14, 2020)

  • 2.7.1 (Sep 14, 2020)

  • 2.7.0 (Sep 11, 2020)

    Changes in PyGAD 2.7.0 (11 September 2020):

    1. The learning_rate parameter in the pygad.nn.train() function defaults to 0.01.
    2. Added support of building neural networks for regression using the new parameter named problem_type. It is added as a parameter to both pygad.nn.train() and pygad.nn.predict() functions. The value of this parameter can be either classification or regression to define the problem type. It defaults to classification.
    3. The activation function for a layer can be set to the string "None" to indicate that the layer has no activation function. As a result, the supported values for the activation function are "sigmoid", "relu", "softmax", and "None".

    To build a regression network using the pygad.nn module, just do the following:

    1. Set the problem_type parameter in the pygad.nn.train() and pygad.nn.predict() functions to the string "regression".
    2. Set the activation function for the output layer to the string "None". This leaves the outputs unbounded, ranging from -infinity to +infinity. If you are sure that all outputs will be nonnegative values, then use the ReLU function instead.

    Check the documentation of the pygad.nn module for a complete example that builds a neural network for regression; a condensed sketch follows below. The regression example is also available at this GitHub project: https://github.com/ahmedfgad/NumPyANN
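    This hedged sketch condenses the two steps above; the layer classes and the train()/predict() keywords follow the pygad.nn documentation, and the toy data values are illustrative.

    import numpy
    import pygad.nn

    # Illustrative toy data: 2 samples, 4 features, 2 continuous targets.
    data_inputs = numpy.array([[2, 5, -3, 0.1],
                               [8, 15, 20, 13]])
    data_outputs = numpy.array([[0.1, 0.2],
                                [1.8, 1.5]])

    input_layer = pygad.nn.InputLayer(4)
    hidden_layer = pygad.nn.DenseLayer(num_neurons=2,
                                       previous_layer=input_layer,
                                       activation_function="relu")
    output_layer = pygad.nn.DenseLayer(num_neurons=2,
                                       previous_layer=hidden_layer,
                                       activation_function="None")  # step 2: no output activation

    pygad.nn.train(num_epochs=100,
                   last_layer=output_layer,
                   data_inputs=data_inputs,
                   data_outputs=data_outputs,
                   learning_rate=0.01,
                   problem_type="regression")  # step 1

    predictions = pygad.nn.predict(last_layer=output_layer,
                                   data_inputs=data_inputs,
                                   problem_type="regression")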

    To build and train a regression network using the pygad.gann module, do the following:

    1. Set the problem_type parameter in the pygad.nn.train() and pygad.nn.predict() functions to the string "regression".
    2. Set the output_activation parameter in the constructor of the pygad.gann.GANN class to "None", as in the sketch below.
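    A hedged sketch of that constructor change; the parameter names follow the pygad.gann documentation, and the layer sizes are illustrative.

    import pygad.gann

    gann_wrapper = pygad.gann.GANN(num_solutions=6,
                                   num_neurons_input=4,
                                   num_neurons_hidden_layers=[2],
                                   num_neurons_output=2,
                                   hidden_activations=["relu"],
                                   output_activation="None")  # step 2 above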

    Check the documentation of the pygad.gann module for an example that builds and trains a neural network for regression. The regression example is also available at this GitHub project: https://github.com/ahmedfgad/NeuralGenetic

    To build a classification network, either ignore the problem_type parameter or set it to "classification" (default value). In this case, the activation function of the last layer can be set to any type (e.g. softmax).

    Source code(tar.gz)
    Source code(zip)
    pygad-2.7.0-py3-none-any.whl(34.52 KB)
    pygad-2.7.0.tar.gz(36.33 KB)
  • 2.6.0(Aug 6, 2020)

  • 2.5.0(Jul 28, 2020)

    Changes in PyGAD 2.5.0 - Release date: 19 July 2020

    1. Two new optional parameters, crossover_probability and mutation_probability, are added to the constructor of the pygad.GA class. While applying the crossover operation, a random value between 0.0 and 1.0 is generated for each parent; if this value is less than or equal to crossover_probability, the parent is selected for the crossover operation. For the mutation operation, a random value between 0.0 and 1.0 is generated for each gene in the solution; if this value is less than or equal to mutation_probability, the gene is selected for mutation. A combined sketch appears at the end of this release note.
    2. A new optional parameter named linewidth is added to the plot_result() method to specify the width of the curve in the plot. It defaults to 3.0.
    3. Previously, the indices of the genes selected for mutation were randomly generated once for all solutions within the generation. Currently, the gene indices are randomly generated for each solution in the population. If the population has 4 solutions, the indices are generated 4 times within a single generation, once per solution.
    4. Previously, the position(s) of the point(s) for single-point and two-points crossover were randomly selected once for all solutions within the generation. Currently, the position(s) are randomly selected for each solution in the population. If the population has 4 solutions, the position(s) are generated 4 times within a single generation, once per solution.
    5. A new optional parameter named gene_space is added to the pygad.GA class constructor. It is used to specify the possible values for each gene in case the user wants to restrict the gene values. It is useful when the gene space is restricted to a certain range or to discrete values.

    Assuming that all genes share the same global space, which includes the values 0.3, 5.2, -4, and 8, those values can be assigned to the gene_space parameter as a list, tuple, or range. Here, a list is assigned to the parameter. By doing that, the gene values are restricted to those assigned to the gene_space parameter.

    gene_space = [0.3, 5.2, -4, 8]
    

    If some genes have different spaces, then gene_space can be assigned a nested list or tuple. In this case, its elements could be:

    1. List, tuple, or range: It holds the individual gene space.
    2. Number (int/float): A single value to be assigned to the gene. This means this gene will have the same value across all generations.
    3. None: A gene with its space set to None is initialized randomly from the range specified by the 2 parameters init_range_low and init_range_high. For mutation, its value is mutated based on a random value from the range specified by the 2 parameters random_mutation_min_val and random_mutation_max_val. If all elements in the gene_space parameter are None, the parameter will not have any effect.

    Assume that a chromosome has 2 genes and each gene has a different value space. Then gene_space can be assigned a nested list/tuple where each element determines the space of one gene. According to the next code, the space of the first gene is [0.4, -5], which has 2 values, and the space of the second gene is [0.5, -3.2, 8.2, -9], which has 4 values.

    gene_space = [[0.4, -5], [0.5, -3.2, 8.2, -9]]
    

    For a 2-gene chromosome, if the first gene is restricted to the discrete values from 0 to 4 and the second gene to the values from 10 to 19, this can be specified as in the next code.

    gene_space = [range(5), range(10, 20)]
    

    If the user does not assign an initial population to the initial_population parameter, the initial population is created randomly based on the gene_space parameter. Moreover, mutation is applied based on this parameter.
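    A hedged sketch tying the 2.5.0 additions together (gene_space plus the two probability parameters); the toy fitness function and the remaining constructor values are illustrative.

    import pygad

    def fitness_func(solution, solution_idx):
        # Toy fitness: reward large gene values.
        return sum(solution)

    ga_instance = pygad.GA(num_generations=20,
                           num_parents_mating=4,
                           fitness_func=fitness_func,
                           sol_per_pop=8,
                           num_genes=2,
                           gene_space=[range(5), range(10, 20)],  # per-gene spaces, as above
                           crossover_probability=0.7,  # parent joins crossover if rand <= 0.7
                           mutation_probability=0.2)   # gene mutates if rand <= 0.2
    ga_instance.run()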

    Source code(tar.gz)
    Source code(zip)
    pygad-2.5.0-py3-none-any.whl(31.84 KB)
    pygad-2.5.0.tar.gz(32.35 KB)
  • 2.4.0(Jul 6, 2020)

    Changes in PyGAD 2.4.0:

    1. A new parameter named delay_after_gen is added. It accepts a non-negative number specifying the time, in seconds, to wait after a generation completes and before moving to the next generation. It defaults to 0.0, which means no delay.
    2. The passed function to the callback_generation parameter of the pygad.GA class constructor can terminate the execution of the genetic algorithm if it returns the string stop. This causes the run() method to stop.

    One important use case for this feature is stopping the genetic algorithm when a condition is met before passing through all the generations. For example, the user may assign a value of 100 to the num_generations parameter, but a condition is met at generation 50 and there is no point in waiting through the remaining 50 generations. To stop at that point, just return the string stop from the function passed to the callback_generation parameter.

    Here is an example of a function to be passed to the callback_generation parameter that stops the execution once the fitness value 70 is reached. The value 70 might be the best possible fitness value; once it is reached, there is no need to run more generations because no further improvement is possible.

    def func_generation(ga_instance):
        # best_solution() returns (solution, fitness, index); element [1] is the fitness.
        if ga_instance.best_solution()[1] >= 70:
            return "stop"  # returning "stop" makes run() terminate early
    
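    For completeness, here is a hedged sketch of wiring this callback, together with the new delay_after_gen parameter, into the constructor; the toy fitness function and the remaining values are illustrative.

    import pygad

    def fitness_func(solution, solution_idx):
        # Toy fitness: reward large gene values.
        return sum(solution)

    ga_instance = pygad.GA(num_generations=100,
                           num_parents_mating=4,
                           fitness_func=fitness_func,
                           sol_per_pop=8,
                           num_genes=6,
                           delay_after_gen=0.5,  # new: wait 0.5 seconds between generations
                           callback_generation=func_generation)
    ga_instance.run()  # stops early once func_generation() returns "stop"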
    Source code(tar.gz)
    Source code(zip)
    pygad-2.4.0-py3-none-any.whl(29.92 KB)
    pygad-2.4.0.tar.gz(30.43 KB)
  • 1.0.19(May 4, 2020)

    Changes in PyGAD 1.0.19 (4 May 2020):

    • The attributes are moved from the class scope to the instance scope.
    • A ValueError exception is raised when incorrect values are passed to the parameters.
    • Two new parameters are added (init_range_low and init_range_high), allowing the user to customize the range from which the gene values in the initial population are selected.
    • The code object __code__ of the passed fitness function is checked to ensure it has the right number of parameters.
    Source code(tar.gz)
    Source code(zip)
    pygad-1.0.19-py3-none-any.whl(11.82 KB)
    pygad-1.0.19.tar.gz(15.97 KB)