🎯 A comprehensive gradient-free optimization framework written in Python

Overview


Solid is a Python framework for gradient-free optimization.

It provides basic implementations of many of the most common optimization algorithms that do not require the calculation of gradients, and enables very rapid development with them.

It's a versatile library: great for learning, modifying, and, of course, using out of the box.

See the detailed documentation here.


Current Features:

  • Genetic Algorithm
  • Evolutionary Algorithm
  • Simulated Annealing
  • Particle Swarm Optimization
  • Tabu Search
  • Harmony Search
  • Stochastic Hill Climb

Usage:

  • pip install solidpy
  • Import the relevant algorithm
  • Create a class that inherits from that algorithm and implements the necessary abstract methods
  • Call its .run() method, which always returns the best solution and its objective function value

Example:

from random import choice, randint, random
from string import ascii_lowercase

from Solid.EvolutionaryAlgorithm import EvolutionaryAlgorithm


class Algorithm(EvolutionaryAlgorithm):
    """
    Tries to evolve a randomly-generated string into the string "clout"
    """
    def _initial_population(self):
        # 50 random 5-letter lowercase strings
        return list(''.join([choice(ascii_lowercase) for _ in range(5)]) for _ in range(50))

    def _fitness(self, member):
        # number of positions where the member matches "clout"
        return float(sum(member[i] == "clout"[i] for i in range(5)))

    def _crossover(self, parent1, parent2):
        # single-point crossover at a random cut
        partition = randint(0, len(self.population[0]) - 1)
        return parent1[0:partition] + parent2[partition:]

    def _mutate(self, member):
        # with probability mutation_rate, replace one random character
        if self.mutation_rate >= random():
            member = list(member)
            member[randint(0, 4)] = choice(ascii_lowercase)
            member = ''.join(member)
        return member


def test_algorithm():
    # arguments: crossover rate, mutation rate, max steps, optional target fitness
    algorithm = Algorithm(.5, .7, 500, max_fitness=None)
    best_solution, best_objective_value = algorithm.run()

Testing

The tests live in the tests folder.

Run pytest from the repository root; it will discover the test files automatically.
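
For illustration, here is a minimal test in the spirit of the example above (hypothetical, not one of the repository's actual test files; it reuses the Algorithm class defined earlier):

def test_algorithm_runs():
    # crossover rate, mutation rate, max steps; no target fitness
    algorithm = Algorithm(.5, .7, 500, max_fitness=None)
    best_solution, best_objective_value = algorithm.run()
    # the example evolves 5-letter strings toward "clout"
    assert len(best_solution) == 5
    # fitness counts matching characters, so it lies in [0, 5]
    assert 0.0 <= best_objective_value <= 5.0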


Contributing

Feel free to send a pull request if you want to add any features or if you find a bug.

Check the issues tab for some potential things to do.

Comments
  • Run flake8 in warning only mode on Python 2 and 3

    This will help us find and fix the Python 3 syntax errors (print_function, etc.), a step towards resolving https://github.com/100/Solid/issues/6.

    opened by cclauss 6
  • Simulated annealing: bug in run method

    Description of the bug

    The run() method of the SimulatedAnnealing class has a bug when the annealing method does not find a better state than the initial one.

    When does it happen

    The bug happens when the annealing algorithm fails to find a better state than the initial one. This can happen when the maximum number of steps is low or when the initial guess is already very good.

    What is the current behaviour

    The tuple returned by the run() method is (None, cost_of_initial_state).

    How to fix

    Add the line

    self.best_state = deepcopy(self.current_state)
    

    between L142 and L143.
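
    For context, here is a minimal sketch of the pattern the issue describes (hypothetical structure and names, not the actual source of SimulatedAnnealing.run()):

    from copy import deepcopy

    def run(self):
        # hypothetical outline: self.current_state holds the initial guess
        self.best_state = deepcopy(self.current_state)  # the proposed fix: seed best_state
        for _ in range(self.max_steps):
            ...  # annealing loop; best_state is reassigned only when a better state is found

    With this seed in place, run() returns the initial state and its cost instead of (None, cost_of_initial_state) when no improving state is found.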

    opened by nelimee 0
  • Correction of EA and GA for nondeterministic fitness functions

    Fixes an issue that occurs when the fitness function is nondeterministic (shuffled cross-validation, for example). In the _select_n method, the total fitness is computed from the stored fitnesses, but the probs variable is computed from freshly recalculated fitness values. This change makes the method use the stored fitnesses throughout, which resolves the inconsistency. It also makes the method run much faster (especially when the fitness function is expensive) by removing unnecessary calls to _fitness.
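
    As an illustration, here is a sketch of roulette-wheel selection driven entirely by stored fitnesses (hypothetical code in the spirit of the fix, not the project's actual _select_n):

    from bisect import bisect_left
    from itertools import accumulate
    from random import random

    def _select_n(self, n):
        # probabilities come from the stored self.fitnesses; self._fitness is never re-invoked
        total = sum(self.fitnesses)
        if total == 0:
            return list(self.population[:n])
        cumulative = list(accumulate(f / total for f in self.fitnesses))
        selected = []
        for _ in range(n):
            # guard the index against floating-point rounding at the top end
            idx = min(bisect_left(cumulative, random()), len(cumulative) - 1)
            selected.append(self.population[idx])
        return selected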

    opened by miraaitsaada 0
  • More Algorithms

    Of course, more algorithms are always great.

    Some suggestions:

    • Coordinate descent
    • Ant colony optimization
    • Differential evolution
    • Cuckoo search
    • Cross-entropy method
    enhancement, help wanted
    opened by 100 0
  • Numerical Stability

    It would be good to find all of the instances where the algorithms may be unstable and handle these cases appropriately (such as overflow). Some cases are handled, but there are probably more.
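
    For example, one common guard of this kind (illustrative only, not Solid's actual code) caps the exponent before calling exp() in a simulated-annealing-style acceptance probability:

    import math

    def acceptance_probability(delta, temperature):
        # math.exp overflows for arguments above roughly 709, so clamp first
        exponent = delta / temperature
        if exponent >= 0:
            return 1.0  # improving moves are always accepted
        if exponent < -700:
            return 0.0  # exp would underflow toward 0 anyway
        return math.exp(exponent)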

    bug, help wanted
    opened by 100 0
  • Better Testing?

    Currently, the testing just makes sure that the algorithm runs without error on a toy problem.

    It would be nice to do something more akin to unit testing, but I'm not quite sure how to do it in this situation since a lot of the testable functionality is provided by the user.

    enhancement, help wanted, question
    opened by 100 0
Releases: 0.11