Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, Ant Colony Optimization Algorithm, Immune Algorithm, Artificial Fish Swarm Algorithm, Differential Evolution and TSP (Traveling Salesman Problem)

Overview

scikit-opt


Swarm Intelligence in Python
(Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, Ant Colony Algorithm, Immune Algorithm, Artificial Fish Swarm Algorithm in Python)

Install

pip install scikit-opt

For the current developer version:

git clone git@github.com:guofei9987/scikit-opt.git
cd scikit-opt
pip install .

Features

Feature1: UDF

UDF (user defined function) is available now!

For example, suppose you have just worked out a new type of selection function.
Your selection function looks like this:
-> Demo code: examples/demo_ga_udf.py#s1

# step1: define your own operator:
def selection_tournament(algorithm, tourn_size):
    FitV = algorithm.FitV
    sel_index = []
    for i in range(algorithm.size_pop):
        aspirants_index = np.random.choice(range(algorithm.size_pop), size=tourn_size)
        sel_index.append(max(aspirants_index, key=lambda i: FitV[i]))
    algorithm.Chrom = algorithm.Chrom[sel_index, :]  # next generation
    return algorithm.Chrom

Import and build GA
-> Demo code: examples/demo_ga_udf.py#s2

import numpy as np
from sko.GA import GA, GA_TSP

demo_func = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2 + (x[2] - 0.5) ** 2
ga = GA(func=demo_func, n_dim=3, size_pop=100, max_iter=500, lb=[-1, -10, -5], ub=[2, 10, 2],
        precision=[1e-7, 1e-7, 1])

Register your UDF with GA
-> Demo code: examples/demo_ga_udf.py#s3

ga.register(operator_name='selection', operator=selection_tournament, tourn_size=3)

scikit-opt also provides some built-in operators
-> Demo code: examples/demo_ga_udf.py#s4

from sko.operators import ranking, selection, crossover, mutation

ga.register(operator_name='ranking', operator=ranking.ranking). \
    register(operator_name='crossover', operator=crossover.crossover_2point). \
    register(operator_name='mutation', operator=mutation.mutation)

Now do GA as usual
-> Demo code: examples/demo_ga_udf.py#s5

best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

For now, UDF supports the crossover, mutation, selection, and ranking operators of GA. scikit-opt provides a dozen operators; see here.

For advanced users:

-> Demo code: examples/demo_ga_udf.py#s6

class MyGA(GA):
    def selection(self, tourn_size=3):
        FitV = self.FitV
        sel_index = []
        for i in range(self.size_pop):
            aspirants_index = np.random.choice(range(self.size_pop), size=tourn_size)
            sel_index.append(max(aspirants_index, key=lambda i: FitV[i]))
        self.Chrom = self.Chrom[sel_index, :]  # next generation
        return self.Chrom

    ranking = ranking.ranking


demo_func = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2 + (x[2] - 0.5) ** 2
my_ga = MyGA(func=demo_func, n_dim=3, size_pop=100, max_iter=500, lb=[-1, -10, -5], ub=[2, 10, 2],
        precision=[1e-7, 1e-7, 1])
best_x, best_y = my_ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

Feature2: continue to run

(New in version 0.3.6)
Run an algorithm for 10 iterations, and then run another 20 iterations based on the previous 10:

from sko.GA import GA

func = lambda x: x[0] ** 2
ga = GA(func=func, n_dim=1)
ga.run(10)
ga.run(20)

Feature3: 4 ways to accelerate

  • vectorization
  • multithreading
  • multiprocessing
  • cached

see https://github.com/guofei9987/scikit-opt/blob/master/examples/example_function_modes.py
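
A minimal sketch of switching modes, assuming the set_run_mode helper from sko.tools and the mode names used in the linked example (verify both against that example):

from sko.tools import set_run_mode
from sko.GA import GA


def obj_func(x):
    # a cheap toy objective; multithreading/multiprocessing pay off mainly
    # when this function is expensive (I/O-bound or CPU-bound respectively)
    return x[0] ** 2 + x[1] ** 2


# choose one of: 'common', 'multithreading', 'multiprocessing', 'vectorization', 'cached'
set_run_mode(obj_func, 'multithreading')

ga = GA(func=obj_func, n_dim=2, size_pop=50, max_iter=100, lb=[-1, -1], ub=[1, 1])
best_x, best_y = ga.run()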

Feature4: GPU computation

We are developing GPU computation, which will be stable in version 1.0.0.
An example is already available: https://github.com/guofei9987/scikit-opt/blob/master/examples/demo_ga_gpu.py

Quick start

1. Differential Evolution

Step1: define your problem
-> Demo code: examples/demo_de.py#s1

'''
min f(x1, x2, x3) = x1^2 + x2^2 + x3^2
s.t.
    x1*x2 >= 1
    x1*x2 <= 5
    x2 + x3 = 1
    0 <= x1, x2, x3 <= 5
'''


def obj_func(p):
    x1, x2, x3 = p
    return x1 ** 2 + x2 ** 2 + x3 ** 2


constraint_eq = [
    lambda x: 1 - x[1] - x[2]
]

constraint_ueq = [
    lambda x: 1 - x[0] * x[1],
    lambda x: x[0] * x[1] - 5
]

Step2: do Differential Evolution
-> Demo code: examples/demo_de.py#s2

from sko.DE import DE

de = DE(func=obj_func, n_dim=3, size_pop=50, max_iter=800, lb=[0, 0, 0], ub=[5, 5, 5],
        constraint_eq=constraint_eq, constraint_ueq=constraint_ueq)

best_x, best_y = de.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)
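
Since DE enforces the constraints through a penalty on the objective rather than exactly, a quick sanity check on the returned solution can be useful. A minimal sketch, not part of the original demo:

# check constraint residuals at the returned solution
for c in constraint_eq:
    print('equality residual:', c(best_x))   # should be close to 0
for c in constraint_ueq:
    print('inequality value:', c(best_x))    # should be <= 0 (approximately)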

2. Genetic Algorithm

Step1: define your problem
-> Demo code: examples/demo_ga.py#s1

import numpy as np


def schaffer(p):
    '''
    This function has many local minima with strong oscillations;
    the global minimum is at (0,0) with value 0.
    '''
    x1, x2 = p
    x = np.square(x1) + np.square(x2)
    return 0.5 + (np.square(np.sin(x)) - 0.5) / np.square(1 + 0.001 * x)

Step2: do Genetic Algorithm
-> Demo code: examples/demo_ga.py#s2

from sko.GA import GA

ga = GA(func=schaffer, n_dim=2, size_pop=50, max_iter=800, lb=[-1, -1], ub=[1, 1], precision=1e-7)
best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

Step3: Plot the result
-> Demo code: examples/demo_ga.py#s3

import pandas as pd
import matplotlib.pyplot as plt

Y_history = pd.DataFrame(ga.all_history_Y)
fig, ax = plt.subplots(2, 1)
ax[0].plot(Y_history.index, Y_history.values, '.', color='red')
Y_history.min(axis=1).cummin().plot(kind='line')
plt.show()

(figure: Figure_1-1, GA convergence)

2.2 Genetic Algorithm for TSP (Travelling Salesman Problem)

Just import GA_TSP; it overloads the crossover and mutation operators to solve the TSP.

Step1: define your problem. Prepare the coordinates of your points and the distance matrix.
Here I generate the data randomly as a demo:
-> Demo code: examples/demo_ga_tsp.py#s1

import numpy as np
from scipy import spatial
import matplotlib.pyplot as plt

num_points = 50

points_coordinate = np.random.rand(num_points, 2)  # generate coordinate of points
distance_matrix = spatial.distance.cdist(points_coordinate, points_coordinate, metric='euclidean')


def cal_total_distance(routine):
    '''The objective function. Input a routine (a permutation of point indices), return the total distance.
    cal_total_distance(np.arange(num_points))
    '''
    num_points, = routine.shape
    return sum([distance_matrix[routine[i % num_points], routine[(i + 1) % num_points]] for i in range(num_points)])

Step2: do GA
-> Demo code: examples/demo_ga_tsp.py#s2

from sko.GA import GA_TSP

ga_tsp = GA_TSP(func=cal_total_distance, n_dim=num_points, size_pop=50, max_iter=500, prob_mut=1)
best_points, best_distance = ga_tsp.run()

Step3: Plot the result:
-> Demo code: examples/demo_ga_tsp.py#s3

fig, ax = plt.subplots(1, 2)
best_points_ = np.concatenate([best_points, [best_points[0]]])
best_points_coordinate = points_coordinate[best_points_, :]
ax[0].plot(best_points_coordinate[:, 0], best_points_coordinate[:, 1], 'o-r')
ax[1].plot(ga_tsp.generation_best_Y)
plt.show()

(figure: GA_TSP result)

3. PSO (Particle Swarm Optimization)

3.1 PSO

Step1: define your problem:
-> Demo code: examples/demo_pso.py#s1

def demo_func(x):
    x1, x2, x3 = x
    return x1 ** 2 + (x2 - 0.05) ** 2 + x3 ** 2

Step2: do PSO
-> Demo code: examples/demo_pso.py#s2

from sko.PSO import PSO

pso = PSO(func=demo_func, n_dim=3, pop=40, max_iter=150, lb=[0, -1, 0.5], ub=[1, 1, 1], w=0.8, c1=0.5, c2=0.5)
pso.run()
print('best_x is ', pso.gbest_x, 'best_y is', pso.gbest_y)

Step3: Plot the result
-> Demo code: examples/demo_pso.py#s3

import matplotlib.pyplot as plt

plt.plot(pso.gbest_y_hist)
plt.show()

(figure: PSO convergence)

3.2 PSO with nonlinear constraint

If you need a nonlinear constraint like (x[0] - 1) ** 2 + (x[1] - 0) ** 2 - 0.5 ** 2 <= 0,
the code looks like this:

constraint_ueq = (
    lambda x: (x[0] - 1) ** 2 + (x[1] - 0) ** 2 - 0.5 ** 2,
)
pso = PSO(func=demo_func, n_dim=2, pop=40, max_iter=max_iter, lb=[-2, -2], ub=[2, 2],
          constraint_ueq=constraint_ueq)  # max_iter is defined earlier in the demo script

Note that you can add more than one nonlinear constraint: just append them all to constraint_ueq, as sketched below.
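
For instance, a sketch with two inequality constraints, each written in the form g(x) <= 0 (assuming a 2-dimensional demo_func as in the snippet above; the iteration budget here is just an example value):

constraint_ueq = (
    lambda x: (x[0] - 1) ** 2 + (x[1] - 0) ** 2 - 0.5 ** 2,  # stay inside a circle around (1, 0)
    lambda x: x[0] + x[1] - 2,                               # x[0] + x[1] <= 2
)
pso = PSO(func=demo_func, n_dim=2, pop=40, max_iter=150, lb=[-2, -2], ub=[2, 2],
          constraint_ueq=constraint_ueq)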

Moreover, we have an animation:
(figure: PSO animation)
see examples/demo_pso_ani.py

4. SA(Simulated Annealing)

4.1 SA for multivariate functions

Step1: define your problem
-> Demo code: examples/demo_sa.py#s1

demo_func = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2 + x[2] ** 2

Step2: do SA
-> Demo code: examples/demo_sa.py#s2

from sko.SA import SA

sa = SA(func=demo_func, x0=[1, 1, 1], T_max=1, T_min=1e-9, L=300, max_stay_counter=150)
best_x, best_y = sa.run()
print('best_x:', best_x, 'best_y', best_y)

Step3: Plot the result
-> Demo code: examples/demo_sa.py#s3

import matplotlib.pyplot as plt
import pandas as pd

plt.plot(pd.DataFrame(sa.best_y_history).cummin(axis=0))
plt.show()

(figure: SA convergence)

Moreover, scikit-opt provides 3 types of Simulated Annealing: Fast, Boltzmann, Cauchy. See the SA documentation for more; a minimal sketch follows.
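
A minimal sketch, assuming the variants are exposed in sko.SA under names such as SAFast, SABoltzmann and SACauchy (check the SA documentation for the exact class names and their extra parameters):

from sko.SA import SAFast  # assumed class name; SABoltzmann and SACauchy would be used the same way

sa_fast = SAFast(func=demo_func, x0=[1, 1, 1], T_max=1, T_min=1e-9, L=300, max_stay_counter=150)
best_x, best_y = sa_fast.run()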

4.2 SA for TSP

Step1: oh yes, define your problem. It is the same cal_total_distance as above, so it is too boring to copy here.

Step2: do SA for TSP
-> Demo code: examples/demo_sa_tsp.py#s2

from sko.SA import SA_TSP

sa_tsp = SA_TSP(func=cal_total_distance, x0=range(num_points), T_max=100, T_min=1, L=10 * num_points)

best_points, best_distance = sa_tsp.run()
print(best_points, best_distance, cal_total_distance(best_points))

Step3: plot the result
-> Demo code: examples/demo_sa_tsp.py#s3

from matplotlib.ticker import FormatStrFormatter

fig, ax = plt.subplots(1, 2)

best_points_ = np.concatenate([best_points, [best_points[0]]])
best_points_coordinate = points_coordinate[best_points_, :]
ax[0].plot(sa_tsp.best_y_history)
ax[0].set_xlabel("Iteration")
ax[0].set_ylabel("Distance")
ax[1].plot(best_points_coordinate[:, 0], best_points_coordinate[:, 1],
           marker='o', markerfacecolor='b', color='c', linestyle='-')
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%.3f'))
ax[1].yaxis.set_major_formatter(FormatStrFormatter('%.3f'))
ax[1].set_xlabel("Longitude")
ax[1].set_ylabel("Latitude")
plt.show()

(figure: SA for TSP result)

More: plot the animation:

(figure: SA for TSP animation)
see examples/demo_sa_tsp.py

5. ACA (Ant Colony Algorithm) for TSP

-> Demo code: examples/demo_aca_tsp.py#s2

from sko.ACA import ACA_TSP

aca = ACA_TSP(func=cal_total_distance, n_dim=num_points,
              size_pop=50, max_iter=200,
              distance_matrix=distance_matrix)

best_x, best_y = aca.run()

(figure: ACA for TSP result)

6. Immune Algorithm (IA)

-> Demo code: examples/demo_ia.py#s2

from sko.IA import IA_TSP

ia_tsp = IA_TSP(func=cal_total_distance, n_dim=num_points, size_pop=500, max_iter=800, prob_mut=0.2,
                T=0.7, alpha=0.95)
best_points, best_distance = ia_tsp.run()
print('best routine:', best_points, 'best_distance:', best_distance)

(figure: IA for TSP result)

7. Artificial Fish Swarm Algorithm (AFSA)

-> Demo code: examples/demo_afsa.py#s1

def func(x):
    x1, x2 = x
    return 1 / x1 ** 2 + x1 ** 2 + 1 / x2 ** 2 + x2 ** 2


from sko.AFSA import AFSA

afsa = AFSA(func, n_dim=2, size_pop=50, max_iter=300,
            max_try_num=100, step=0.5, visual=0.3,
            q=0.98, delta=0.5)
best_x, best_y = afsa.run()
print(best_x, best_y)

Projects using scikit-opt

Issues
  • Can GA's dim not be set to 1?

    Hi, I made some small changes to your demo and set the number of decision variables to 1:

    def schaffer(p):
        x = p
        # return 0.5 + (np.sin(x) - 0.5) / np.square(1 + 0.001 * x)
        return x + 10 * np.sin(5 * x) + 7 * np.cos(4 * x)
    
     ga = GA(func=schaffer, n_dim=1, size_pop=50, max_iter=800, lb=[-1], ub=[1], precision=1e-7)
    

    But it raises an error:

    Traceback (most recent call last):
      File "...", line 25, in <module>
        best_x, best_y = ga.run()
      File "...", line 80, in run
        self.crossover()
      File "...\venv\lib\site-packages\sko\operators\crossover.py", line 44, in crossover_2point_bit
        Chrom1 ^= mask2
    ValueError: non-broadcastable output operand with shape (25,1,25) doesn't match the broadcast shape (25,25,25)

    My understanding is that schaffer is where I define my own objective function, is that right? Then why can't dim be set to 1? Also, in your example:

        x1, x2 = p
        x = np.square(x1) + np.square(x2)
        return 0.5 + (np.sin(x) - 0.5) / np.square(1 + 0.001 * x)
    

    If x1 and x2 are the decision variables, why introduce an intermediate variable x? Is it just for readability? Thanks!

    question wontfix 
    opened by zhangxiao123qqq 13
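
    A hedged workaround for this issue (not an official fix, and newer versions may handle n_dim=1 directly): pad the problem to two dimensions and ignore the dummy second variable inside the objective:

    import numpy as np
    from sko.GA import GA

    def f_1d(p):
        x, _dummy = p  # the second variable is padding and is ignored
        return x + 10 * np.sin(5 * x) + 7 * np.cos(4 * x)

    ga = GA(func=f_1d, n_dim=2, size_pop=50, max_iter=800,
            lb=[-1, 0], ub=[1, 1], precision=1e-7)
    best_x, best_y = ga.run()
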
  • The constraints do not seem to take effect

    Thank you very much for your prompt replies. I used the constraint list from the demo to apply constraints to the differential evolution algorithm, but the computed result looks as if the constraints had no effect.

    constraint_ueq = []
    for i in range(16):
        for j in range(15):
            constraint_ueq.append(lambda x: x[0 + j + i * 16] - x[1 + j + i * 16])
    for i in range(16):
        for j in range(15):
            constraint_ueq.append(lambda x: x[0 + j * 16 + i] - x[16 + j * 16 + i])
    for i in range(6):
        constraint_ueq.append(lambda x: x[256 + i] - x[257 + i])
    for i in range(11):
        constraint_ueq.append(lambda x: x[264 + i] - x[263 + i])

    I am doing automatic calibration of an automotive system, which involves a 256-point surface, a 7-point curve, and a 12-point curve. The surface and the curves must be smooth (increasing along the x and y axes), so I applied the inequality constraints above, but the optimized result does not reflect them. I looked at the DE source code; it seems to apply the constraints through a penalty function? The code is as follows:

    if not self.has_constraint:
        self.Y = self.Y_raw
    else:  # constraint
        penalty_eq = np.array([np.sum(np.abs([c_i(x) for c_i in self.constraint_eq])) for x in self.X])
        penalty_ueq = np.array([np.sum(np.abs([max(0, c_i(x)) for c_i in self.constraint_ueq])) for x in self.X])
        self.Y = self.Y_raw + 1e5 * penalty_eq + 1e5 * penalty_ueq

    How can I make my constraints take effect in the algorithm's population generation, mutation, crossover, and selection steps? Thank you again for your prompt reply.

    question resolved 
    opened by MaCshan 11
  • The current parallelization seems to be multithreading only; could multiprocessing be used to make full use of multiple cores?

    If the objective function is compute-intensive, testing shows that the current parallelization does not save any time.

    opened by meihaojie 8
  • Capacitated VRP with time windows

    Hi, thanks for the package with multiple metaheuristics that can be applied with ease. I was wondering whether we can define our own vehicle routing problems with constraints like time windows and use this package, or whether we have to define our own custom functions. Thanks!

    question resolved 
    opened by ameya1995 8
  • Wrong upper/lower bounds in GA integer programming

    In my program there are 15 variables and the integer range is 0 to 1363, so I set lb and ub to 0 and 1363. However, the program generates numbers greater than 1363, for example raising IndexError: index 1901 is out of bounds for axis 0 with size 1363. As long as I set ub below 1000, the problem does not occur.

    from sko.GA import GA
    K=15
    ga = GA(func=cost_func, n_dim=K, max_iter=500, lb=[0 for i in range(K)], ub=[1363 for i in range(K)], precision=[1 for i in range(K)])
    rs = ga.run()
    print(rs)
    
    bug resolved 
    opened by phoenixsfly 8
  • A bug in AFSA

    best_Y is not updated in several places.

    bug resolved 
    opened by bzxgcs 7
  • Parallelizing the run of the function passed into the GA constructor

    Hi,

    I would like to use GA for the statsmodels SARIMAX function. I was wondering how it would be possible to parallelize the fitting process of each individual in GA using torch. Crossovers and selections are parallelized, however the function part is left out of this functionality. I would be glad if you could help me.

    import torch
    import numpy as np
    import pandas as pd
    from scipy.stats import norm
    import statsmodels.api as sm
    import matplotlib.pyplot as plt
    from datetime import datetime
    import requests
    from io import BytesIO
    from sko.GA import GA  # import needed for the GA(...) call below

    # Dataset
    wpi1 = requests.get('https://www.stata-press.com/data/r12/wpi1.dta').content
    data = pd.read_stata(BytesIO(wpi1))
    data.index = data.t
    # Set the frequency
    data.index.freq="QS-OCT"
    
    def arima(X):
          X = [int(x) for x in X]
          order = [X[0],X[1],X[2]]
    
          mod = sm.tsa.statespace.SARIMAX(data['wpi'], 
                                          order=order, 
                                          seasonal_order = [X[3],X[4],X[5],6],
                                          enforce_stationarity=False,
                                          enforce_invertibility=False
                                          )
          res = mod.fit(disp=False)
          #print(X)
          #print(res.aic)
          return res.aic
    
    
    
    ga = GA(func=arima, n_dim=6, size_pop=50, max_iter=50, lb=[0,0,0,0,0,0], ub=[6,4,6,4,3,4], precision=1)
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    
    ga.to(device=device)
    
    ga.run()
    


    feature request 
    opened by koralturkk 7
  • How should integer programming be written?

    demo_func=lambda x: x[0]**2 + x[1]**2 + x[2]**2
    ga = GA(func=demo_func,n_dim=3, max_iter=500, lb=[-1, -10, -5], ub=[2, 10, 2])
    best_x, best_y = ga.fit()
    

    If x[0] may only take integer values, how should the code be written? Thanks!

    question resolved 
    opened by bjmwang 7
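
    For reference, a minimal sketch of the usual approach (as also used in the GA example near the top of this README): give the integer variable a precision of 1, which makes GA treat it as an integer:

    from sko.GA import GA

    demo_func = lambda x: x[0] ** 2 + x[1] ** 2 + x[2] ** 2
    # precision=1 for x[0] makes it an integer variable; the other two stay continuous
    ga = GA(func=demo_func, n_dim=3, max_iter=500,
            lb=[-1, -10, -5], ub=[2, 10, 2], precision=[1, 1e-7, 1e-7])
    best_x, best_y = ga.run()
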
  • Modified for instance methods, so the fitness function can now be built inside a class

    Hi, I made a small modification in the repo. In my project, the fitness function has to be written inside a class, but running it always caused problems, so I changed func_transfer in tool.py. Please take a look and see whether there is any problem with it.

    opened by Yingliangzhe 6
  • How can I nest another level of local search inside the first-level search?

    The objective function of my outer search is written as below. I need to run another search based on the result of ant.step4() (stored in the ant's attributes). How should I write this?

    def obj_func(x):
        jobs = read_data(Const.filename)  # read the job information
        ant = Antibody(jobs, x)           # create an antibody instance
        ant.step1()                       # group according to random keys
        ant.step2()                       # compute departure times from the grouping
        ant.step3()
        ant.step4()
        # ant_a = Antibody_a(ant)
        ant.step5()
        ant.cal_obj()                     # compute this antibody's objective value
        return ant.obj

    opened by cliff0149 6
  • Update args.md

    opened by lbiscuola 1
  • Error: ValueError: cannot find context for 'fork'

    Describe the bug: ValueError: cannot find context for 'fork'

    To Reproduce, steps to reproduce the behavior:

    1. Download the project locally
    2. Run demo_afsa.py
    3. See error. Full traceback:
    D:\programs\Anaconda3\python.exe D:/programs/下载/scikit-opt-master/examples/demo_afsa.py
    Traceback (most recent call last):
      File "D:\programs\Anaconda3\lib\multiprocessing\context.py", line 190, in get_context
        ctx = _concrete_contexts[method]
    KeyError: 'fork'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "D:/programs/下载/scikit-opt-master/examples/demo_afsa.py", line 6, in <module>
        from sko.AFSA import AFSA
      File "D:\programs\下载\scikit-opt-master\sko\__init__.py", line 3, in <module>
        from . import DE, GA, PSO, SA, ACA, AFSA, IA, tools
      File "D:\programs\下载\scikit-opt-master\sko\DE.py", line 11, in <module>
        from .GA import GeneticAlgorithmBase, GA
      File "D:\programs\下载\scikit-opt-master\sko\GA.py", line 9, in <module>
        from sko.tools import func_transformer
      File "D:\programs\下载\scikit-opt-master\sko\tools.py", line 8, in <module>
        multiprocessing.set_start_method('fork')
      File "D:\programs\Anaconda3\lib\multiprocessing\context.py", line 246, in set_start_method
        self._actual_context = self.get_context(method)
      File "D:\programs\Anaconda3\lib\multiprocessing\context.py", line 238, in get_context
        return super().get_context(method)
      File "D:\programs\Anaconda3\lib\multiprocessing\context.py", line 192, in get_context
        raise ValueError('cannot find context for %r' % method)
    ValueError: cannot find context for 'fork'
    
    Process finished with exit code 1
    
    

    Operating environment:

    • Windows 10
    • Python version 3.6
    • numpy 1.19.2
    • scipy version 1.1.0
    bug 
    opened by zanshuxun 2
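
    For context, a hedged note (not necessarily the library's exact fix): the 'fork' start method only exists on Unix-like systems, so calling multiprocessing.set_start_method('fork') unconditionally fails on Windows. A guarded sketch:

    import multiprocessing
    import platform

    # 'fork' is only available on Unix-like systems; Windows only supports 'spawn'
    if platform.system() != 'Windows':
        multiprocessing.set_start_method('fork', force=True)
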
  • ValueError: operands could not be broadcast together with shapes (400,1) (20,30) (20,30)

    After I added another level of search inside the algorithm, this error appeared, but it is raised by the outer search, not the inner one. The problem is in selection: for np.where, the condition matrix and the following matrices seem to have different dimensions. But when I print the condition matrix it is a (20,1) matrix, and I do not understand why the program treats it as (400,1). (The printed condition matrix is a (20,1) column of True/False values.)

    Iteration 0, best value [27.6]
    Traceback (most recent call last):
      File "D:/Desktop/半集成式VRPSDP/代码/main.py", line 121, in <module>
        best_x, best_y = de.run()
      File "D:\841370\anaconda\lib\site-packages\sko\DE.py", line 86, in run
        self.selection()
      File "D:\841370\anaconda\lib\site-packages\sko\DE.py", line 76, in selection
        self.X = np.where((f_X < f_U).reshape(-1, 1), X, U)
      File "<__array_function__ internals>", line 5, in where
    ValueError: operands could not be broadcast together with shapes (400,1) (20,30) (20,30)

    In addition, there is another warning that I have not found the cause of: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray, raised at return np.array([func_cached(tuple(x)) for x in X]).

    opened by cliff0149 4
  • Suggestion: add a way to keep the best n individuals

    (screenshot omitted) When using the genetic algorithm, I find that good individuals are often eliminated and the population keeps cycling at a lower level. If the best N individuals could be preserved, this situation could be avoided.

    feature request 
    opened by qiujunhan 0
  • Why does the genetic algorithm keep jumping between two extremes toward the end of the run?

    (screenshots omitted) As shown in the screenshots, later in the run the values reported by P keep alternating between these two sets of data.

    question resolved 
    opened by qiujunhan 8
  • How to set a fixed random seed for PSO

    So that the result is not different on every run; how can this be done?

    feature request 
    opened by imhithanks 2
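
    No seed parameter is shown in this document. A common workaround, assuming scikit-opt draws its random numbers from NumPy's global generator, is to fix the NumPy seed before constructing the optimizer:

    import numpy as np
    from sko.PSO import PSO

    np.random.seed(42)  # fix NumPy's global seed so each run starts identically
    demo_func = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2 + x[2] ** 2
    pso = PSO(func=demo_func, n_dim=3, pop=40, max_iter=150, lb=[0, -1, 0.5], ub=[1, 1, 1])
    pso.run()
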
  • When using GA for integer programming, how can the initialized values be made evenly distributed?

    As in the title: when I use GA for integer programming with GA(func=my_fuc, n_dim=100, size_pop=50, max_iter=1000, lb=[0]*100, ub=[3]*100, precision=[1]*100), I find that most of the initialized values are concentrated at the boundary (that is, at 3). Looking at the source, this seems to be because GA.py does X = np.where(X > self.ub, self.ub, X). Is this clipping necessary? Or could values beyond the bounds be distributed more evenly, for example by using a modulo operation? Also, can the other methods be used for integer programming? Thanks!

    feature request 
    opened by Zhaominghao1314 4
  • How can a neural network be optimized with this module?

    I recently wanted to use a neural network, but I do not know how to use this module to optimize one. Is there a demo example? I am using Keras or PyTorch.

    question resolved 
    opened by Zhao-YB 1
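
    There is no neural-network demo in this document. The general pattern is to wrap "train the model and return a validation score" as the objective function; a hedged sketch with a hypothetical stand-in (train_and_score would call Keras/PyTorch in practice):

    from sko.GA import GA

    def train_and_score(params):
        # hypothetical stand-in: in practice, build and train a Keras/PyTorch model
        # with these hyperparameters and return the validation loss
        learning_rate, hidden_units = params[0], int(params[1])
        return (learning_rate - 0.01) ** 2 + (hidden_units - 64) ** 2 / 1e4  # dummy loss surface

    ga = GA(func=train_and_score, n_dim=2, size_pop=20, max_iter=30,
            lb=[1e-4, 8], ub=[1e-1, 128], precision=[1e-4, 1])
    best_params, best_loss = ga.run()
    print(best_params, best_loss)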