Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, Ant Colony Optimization Algorithm, Immune Algorithm, Artificial Fish Swarm Algorithm, Differential Evolution and TSP (Traveling Salesman Problem)

Overview

scikit-opt


Swarm Intelligence in Python
(Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, Ant Colony Algorithm, Immune Algorithm, Artificial Fish Swarm Algorithm in Python)

install

pip install scikit-opt

For the current developer version:

git clone git@github.com:guofei9987/scikit-opt.git
cd scikit-opt
pip install .

Features

Feature 1: UDF

UDF (user-defined function) is available now!

For example, suppose you have just worked out a new type of selection function.
Your selection function looks like this:
-> Demo code: examples/demo_ga_udf.py#s1

import numpy as np


# step1: define your own operator:
def selection_tournament(algorithm, tourn_size):
    FitV = algorithm.FitV
    sel_index = []
    for i in range(algorithm.size_pop):
        aspirants_index = np.random.choice(range(algorithm.size_pop), size=tourn_size)
        sel_index.append(max(aspirants_index, key=lambda i: FitV[i]))
    algorithm.Chrom = algorithm.Chrom[sel_index, :]  # next generation
    return algorithm.Chrom

Import and build the GA
-> Demo code: examples/demo_ga_udf.py#s2

import numpy as np
from sko.GA import GA, GA_TSP

demo_func = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2 + (x[2] - 0.5) ** 2
ga = GA(func=demo_func, n_dim=3, size_pop=100, max_iter=500, lb=[-1, -10, -5], ub=[2, 10, 2],
        precision=[1e-7, 1e-7, 1])

Register your UDF to GA
-> Demo code: examples/demo_ga_udf.py#s3

ga.register(operator_name='selection', operator=selection_tournament, tourn_size=3)

scikit-opt also provides some built-in operators
-> Demo code: examples/demo_ga_udf.py#s4

from sko.operators import ranking, selection, crossover, mutation

ga.register(operator_name='ranking', operator=ranking.ranking). \
    register(operator_name='crossover', operator=crossover.crossover_2point). \
    register(operator_name='mutation', operator=mutation.mutation)

Now run GA as usual
-> Demo code: examples/demo_ga_udf.py#s5

best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

So far, UDFs are supported for the crossover, mutation, selection, and ranking operators of GA. scikit-opt provides a dozen built-in operators; see here.

For advanced users:

-> Demo code: examples/demo_ga_udf.py#s6

class MyGA(GA):
    def selection(self, tourn_size=3):
        FitV = self.FitV
        sel_index = []
        for i in range(self.size_pop):
            aspirants_index = np.random.choice(range(self.size_pop), size=tourn_size)
            sel_index.append(max(aspirants_index, key=lambda i: FitV[i]))
        self.Chrom = self.Chrom[sel_index, :]  # next generation
        return self.Chrom

    ranking = ranking.ranking


demo_func = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2 + (x[2] - 0.5) ** 2
my_ga = MyGA(func=demo_func, n_dim=3, size_pop=100, max_iter=500, lb=[-1, -10, -5], ub=[2, 10, 2],
        precision=[1e-7, 1e-7, 1])
best_x, best_y = my_ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

Feature 2: GPU computation

We are developing GPU computation, which will be stable in version 1.0.0.
An example is already available: https://github.com/guofei9987/scikit-opt/blob/master/examples/demo_ga_gpu.py

Feature 3: continue a run

(New in version 0.3.6)
Run an algorithm for 10 iterations, and then run another 20 iterations based on the previous 10:

from sko.GA import GA

func = lambda x: x[0] ** 2
ga = GA(func=func, n_dim=1)
ga.run(10)
ga.run(20)

Quick start

1. Differential Evolution

Step1: define your problem
-> Demo code: examples/demo_de.py#s1

'''
min f(x1, x2, x3) = x1^2 + x2^2 + x3^2
s.t.
    x1*x2 >= 1
    x1*x2 <= 5
    x2 + x3 = 1
    0 <= x1, x2, x3 <= 5
'''


def obj_func(p):
    x1, x2, x3 = p
    return x1 ** 2 + x2 ** 2 + x3 ** 2


constraint_eq = [
    lambda x: 1 - x[1] - x[2]
]

constraint_ueq = [
    lambda x: 1 - x[0] * x[1],
    lambda x: x[0] * x[1] - 5
]

Step2: do Differential Evolution
-> Demo code: examples/demo_de.py#s2

from sko.DE import DE

de = DE(func=obj_func, n_dim=3, size_pop=50, max_iter=800, lb=[0, 0, 0], ub=[5, 5, 5],
        constraint_eq=constraint_eq, constraint_ueq=constraint_ueq)

best_x, best_y = de.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

2. Genetic Algorithm

Step1: define your problem
-> Demo code: examples/demo_ga.py#s1

import numpy as np


def schaffer(p):
    '''
    This function has plenty of local minima with strong oscillations.
    The global minimum is at (0, 0) with value 0.
    '''
    x1, x2 = p
    x = np.square(x1) + np.square(x2)
    return 0.5 + (np.square(np.sin(x)) - 0.5) / np.square(1 + 0.001 * x)

Step2: do Genetic Algorithm
-> Demo code: examples/demo_ga.py#s2

from sko.GA import GA

ga = GA(func=schaffer, n_dim=2, size_pop=50, max_iter=800, lb=[-1, -1], ub=[1, 1], precision=1e-7)
best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)

Step3: plot the result
-> Demo code: examples/demo_ga.py#s3

import pandas as pd
import matplotlib.pyplot as plt

Y_history = pd.DataFrame(ga.all_history_Y)
fig, ax = plt.subplots(2, 1)
ax[0].plot(Y_history.index, Y_history.values, '.', color='red')
Y_history.min(axis=1).cummin().plot(kind='line')
plt.show()


2.2 Genetic Algorithm for TSP (Travelling Salesman Problem)

Just import GA_TSP; it overrides the crossover and mutation operators to solve the TSP.

Step1: define your problem. Prepare your points' coordinates and the distance matrix.
Here the data are generated randomly as a demo:
-> Demo code: examples/demo_ga_tsp.py#s1

import numpy as np
from scipy import spatial
import matplotlib.pyplot as plt

num_points = 50

points_coordinate = np.random.rand(num_points, 2)  # generate coordinate of points
distance_matrix = spatial.distance.cdist(points_coordinate, points_coordinate, metric='euclidean')


def cal_total_distance(routine):
    '''The objective function. input routine, return total distance.
    cal_total_distance(np.arange(num_points))
    '''
    num_points, = routine.shape
    return sum([distance_matrix[routine[i % num_points], routine[(i + 1) % num_points]] for i in range(num_points)])

Step2: do GA
-> Demo code: examples/demo_ga_tsp.py#s2

from sko.GA import GA_TSP

ga_tsp = GA_TSP(func=cal_total_distance, n_dim=num_points, size_pop=50, max_iter=500, prob_mut=1)
best_points, best_distance = ga_tsp.run()

Step3: Plot the result:
-> Demo code: examples/demo_ga_tsp.py#s3

fig, ax = plt.subplots(1, 2)
best_points_ = np.concatenate([best_points, [best_points[0]]])
best_points_coordinate = points_coordinate[best_points_, :]
ax[0].plot(best_points_coordinate[:, 0], best_points_coordinate[:, 1], 'o-r')
ax[1].plot(ga_tsp.generation_best_Y)
plt.show()


3. PSO (Particle Swarm Optimization)

3.1 PSO

Step1: define your problem:
-> Demo code: examples/demo_pso.py#s1

def demo_func(x):
    x1, x2, x3 = x
    return x1 ** 2 + (x2 - 0.05) ** 2 + x3 ** 2

Step2: do PSO
-> Demo code: examples/demo_pso.py#s2

from sko.PSO import PSO

pso = PSO(func=demo_func, n_dim=3, pop=40, max_iter=150, lb=[0, -1, 0.5], ub=[1, 1, 1], w=0.8, c1=0.5, c2=0.5)
pso.run()
print('best_x is ', pso.gbest_x, 'best_y is', pso.gbest_y)

Step3: Plot the result
-> Demo code: examples/demo_pso.py#s3

import matplotlib.pyplot as plt

plt.plot(pso.gbest_y_hist)
plt.show()


3.2 PSO with nonlinear constraint

If you need a nonlinear constraint such as (x[0] - 1) ** 2 + (x[1] - 0) ** 2 - 0.5 ** 2 <= 0,
the code looks like this:

constraint_ueq = (
    lambda x: (x[0] - 1) ** 2 + (x[1] - 0) ** 2 - 0.5 ** 2,
)
# note: demo_func must take 2 variables here (n_dim=2), and max_iter must be defined beforehand
pso = PSO(func=demo_func, n_dim=2, pop=40, max_iter=max_iter, lb=[-2, -2], ub=[2, 2],
          constraint_ueq=constraint_ueq)

Note that you can add more than one nonlinear constraint; just add it to constraint_ueq, as in the sketch below.
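
A minimal sketch of two constraints in one tuple (the 2-D objective and the extra constraint x0 + x1 <= 2 are made up here for illustration; each entry must be <= 0 when satisfied):

from sko.PSO import PSO

demo_func_2d = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2  # hypothetical 2-D objective

constraint_ueq = (
    lambda x: (x[0] - 1) ** 2 + (x[1] - 0) ** 2 - 0.5 ** 2,  # stay inside a circle of radius 0.5 around (1, 0)
    lambda x: x[0] + x[1] - 2,                                # hypothetical extra constraint: x0 + x1 <= 2
)

pso = PSO(func=demo_func_2d, n_dim=2, pop=40, max_iter=150, lb=[-2, -2], ub=[2, 2],
          constraint_ueq=constraint_ueq)
pso.run()
print('best_x is ', pso.gbest_x, 'best_y is', pso.gbest_y)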

Moreover, there is an animation of the optimization process; see examples/demo_pso_ani.py

4. SA (Simulated Annealing)

4.1 SA for a multivariate function

Step1: define your problem
-> Demo code: examples/demo_sa.py#s1

demo_func = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2 + x[2] ** 2

Step2: do SA
-> Demo code: examples/demo_sa.py#s2

from sko.SA import SA

sa = SA(func=demo_func, x0=[1, 1, 1], T_max=1, T_min=1e-9, L=300, max_stay_counter=150)
best_x, best_y = sa.run()
print('best_x:', best_x, 'best_y', best_y)

Step3: Plot the result
-> Demo code: examples/demo_sa.py#s3

import matplotlib.pyplot as plt
import pandas as pd

plt.plot(pd.DataFrame(sa.best_y_history).cummin(axis=0))
plt.show()


Moreover, scikit-opt provides 3 types of Simulated Annealing: Fast, Boltzmann, and Cauchy (see the sketch below and the SA documentation for more).
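
A minimal sketch of switching between variants, assuming the class names SAFast, SABoltzmann and SACauchy in sko.SA (check the SA documentation for the exact names and any extra parameters):

from sko.SA import SAFast, SABoltzmann, SACauchy  # assumed class names for the three variants

demo_func = lambda x: x[0] ** 2 + (x[1] - 0.05) ** 2 + x[2] ** 2

# same call interface as SA above; only the annealing (cooling) schedule differs
sa_boltzmann = SABoltzmann(func=demo_func, x0=[1, 1, 1], T_max=1, T_min=1e-9, L=300, max_stay_counter=150)
best_x, best_y = sa_boltzmann.run()
print('best_x:', best_x, 'best_y', best_y)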

4.2 SA for TSP

Step1: define your problem. It is the same TSP setup as in the GA_TSP example (section 2.2); it is repeated below so this section runs standalone.
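
For completeness, the problem definition copied from the GA_TSP example above:

import numpy as np
from scipy import spatial

num_points = 50

points_coordinate = np.random.rand(num_points, 2)  # generate coordinates of the points
distance_matrix = spatial.distance.cdist(points_coordinate, points_coordinate, metric='euclidean')


def cal_total_distance(routine):
    '''The objective function. Input a routine, return the total distance.'''
    num_points, = routine.shape
    return sum([distance_matrix[routine[i % num_points], routine[(i + 1) % num_points]] for i in range(num_points)])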

Step2: do SA for TSP
-> Demo code: examples/demo_sa_tsp.py#s2

from sko.SA import SA_TSP

sa_tsp = SA_TSP(func=cal_total_distance, x0=range(num_points), T_max=100, T_min=1, L=10 * num_points)

best_points, best_distance = sa_tsp.run()
print(best_points, best_distance, cal_total_distance(best_points))

Step3: plot the result
-> Demo code: examples/demo_sa_tsp.py#s3

from matplotlib.ticker import FormatStrFormatter

fig, ax = plt.subplots(1, 2)

best_points_ = np.concatenate([best_points, [best_points[0]]])
best_points_coordinate = points_coordinate[best_points_, :]
ax[0].plot(sa_tsp.best_y_history)
ax[0].set_xlabel("Iteration")
ax[0].set_ylabel("Distance")
ax[1].plot(best_points_coordinate[:, 0], best_points_coordinate[:, 1],
           marker='o', markerfacecolor='b', color='c', linestyle='-')
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%.3f'))
ax[1].yaxis.set_major_formatter(FormatStrFormatter('%.3f'))
ax[1].set_xlabel("Longitude")
ax[1].set_ylabel("Latitude")
plt.show()


More: plot the animation; see examples/demo_sa_tsp.py

5. ACA (Ant Colony Algorithm) for TSP

-> Demo code: examples/demo_aca_tsp.py#s2

from sko.ACA import ACA_TSP

aca = ACA_TSP(func=cal_total_distance, n_dim=num_points,
              size_pop=50, max_iter=200,
              distance_matrix=distance_matrix)

best_x, best_y = aca.run()


6. Immune Algorithm (IA)

-> Demo code: examples/demo_ia.py#s2

from sko.IA import IA_TSP

ia_tsp = IA_TSP(func=cal_total_distance, n_dim=num_points, size_pop=500, max_iter=800, prob_mut=0.2,
                T=0.7, alpha=0.95)
best_points, best_distance = ia_tsp.run()
print('best routine:', best_points, 'best_distance:', best_distance)


7. Artificial Fish Swarm Algorithm (AFSA)

-> Demo code: examples/demo_afsa.py#s1

def func(x):
    x1, x2 = x
    return 1 / x1 ** 2 + x1 ** 2 + 1 / x2 ** 2 + x2 ** 2


from sko.AFSA import AFSA

afsa = AFSA(func, n_dim=2, size_pop=50, max_iter=300,
            max_try_num=100, step=0.5, visual=0.3,
            q=0.98, delta=0.5)
best_x, best_y = afsa.run()
print(best_x, best_y)

Comments
  • Why can't n_dim be 1 in GA?

    Hi, I made some changes to the demo you provided and set the number of variables to 1.

    def schaffer(p):
        x = p
        # return 0.5 + (np.sin(x) - 0.5) / np.square(1 + 0.001 * x)
        return x + 10 * np.sin(5 * x) + 7 * np.cos(4 * x)

    ga = GA(func=schaffer, n_dim=1, size_pop=50, max_iter=800, lb=[-1], ub=[1], precision=1e-7)

    But it raises the following error:

    Traceback (most recent call last):
      File "...", line 25, in <module>
        best_x, best_y = ga.run()
      File "...", line 80, in run
        self.crossover()
      File "...\venv\lib\site-packages\sko\operators\crossover.py", line 44, in crossover_2point_bit
        Chrom1 ^= mask2
    ValueError: non-broadcastable output operand with shape (25,1,25) doesn't match the broadcast shape (25,25,25)

    My understanding is that schaffer is where I define my own objective function, is that right? Then why can't dim be set to 1? Also, in the example you gave:

        x1, x2 = p
        x = np.square(x1) + np.square(x2)
        return 0.5 + (np.sin(x) - 0.5) / np.square(1 + 0.001 * x)

    If x1 and x2 are the decision variables, why introduce an intermediate variable x? Is it just for ease of writing? Thanks!

    question wontfix 
    opened by zhangxiao123qqq 13
  • The constraints are not taking effect

    Hi, thank you very much for your prompt reply. I used the constraint list from the demo to impose constraints on the differential evolution algorithm, but the computed results seem to show that the constraints did not take effect.

    constraint_ueq = []
    for i in range(16):
        for j in range(15):
            constraint_ueq.append(lambda x: x[0 + j + i * 16] - x[1 + j + i * 16])
    for i in range(16):
        for j in range(15):
            constraint_ueq.append(lambda x: x[0 + j * 16 + i] - x[16 + j * 16 + i])
    for i in range(6):
        constraint_ueq.append(lambda x: x[256 + i] - x[257 + i])
    for i in range(11):
        constraint_ueq.append(lambda x: x[264 + i] - x[263 + i])

    I am doing automatic calibration of an automotive system, which involves a 256-point surface, a 7-point curve and a 12-point curve. The surface and the curves each have smoothness requirements (monotonically increasing along the x and y axes), so I imposed the inequality constraints above, but the optimized result does not seem to reflect them. I looked at the source code of the differential evolution algorithm; it applies the constraints with a penalty function? The code is as follows:

    if not self.has_constraint:
        self.Y = self.Y_raw
    else:
        # constraint
        penalty_eq = np.array([np.sum(np.abs([c_i(x) for c_i in self.constraint_eq])) for x in self.X])
        penalty_ueq = np.array([np.sum(np.abs([max(0, c_i(x)) for c_i in self.constraint_ueq])) for x in self.X])
        self.Y = self.Y_raw + 1e5 * penalty_eq + 1e5 * penalty_ueq

    How can I make my constraints take effect during population generation, mutation, crossover and selection? Thank you very much.

    question resolved 
    opened by MaCshan 11
  • Integer programming upper/lower bounds error in the genetic algorithm

    In my program there are 15 variables and the integer range is 0 to 1363, so I set lb and ub to 0 and 1363. But I found that the program generates numbers greater than 1363; for example it raised IndexError: index 1901 is out of bounds for axis 0 with size 1363. However, as long as I set ub below 1000, there is no problem.

    from sko.GA import GA
    K=15
    ga = GA(func=cost_func, n_dim=K, max_iter=500, lb=[0 for i in range(K)], ub=[1363 for i in range(K)], precision=[1 for i in range(K)])
    rs = ga.run()
    print(rs)
    
    bug resolved 
    opened by phoenixsfly 8
  • capacitated vrp with time windows

    Hi, thanks for the package with multiple metaheuristics which can be implemented with ease. I was wondering whether we can define our own vehicle routing problems with constraints like time windows and use this package, or whether we have to define our own custom functions. Thanks!

    question resolved 
    opened by ameya1995 8
  • Parallelizing the run of function passed into GA constructor

    Hi,

    I would like to use GA for the statsmodels SARIMAX function. I was wondering how it would be possible to parallelize the fitting of each individual in GA using torch. Crossovers and selections are parallelized; however, the objective-function evaluation is left out of such functionality. I would be glad if you can help me.

    import torch
    import numpy as np
    import pandas as pd
    from scipy.stats import norm
    import statsmodels.api as sm
    import matplotlib.pyplot as plt
    from datetime import datetime
    import requests
    from io import BytesIO

    # Dataset
    wpi1 = requests.get('https://www.stata-press.com/data/r12/wpi1.dta').content
    data = pd.read_stata(BytesIO(wpi1))
    data.index = data.t
    # Set the frequency
    data.index.freq="QS-OCT"
    
    def arima(X):
          X = [int(x) for x in X]
          order = [X[0],X[1],X[2]]
    
          mod = sm.tsa.statespace.SARIMAX(data['wpi'], 
                                          order=order, 
                                          seasonal_order = [X[3],X[4],X[5],6],
                                          enforce_stationarity=False,
                                          enforce_invertibility=False
                                          )
          res = mod.fit(disp=False)
          #print(X)
          #print(res.aic)
          return res.aic
    
    
    
    ga = GA(func=arima, n_dim=6, size_pop=50, max_iter=50, lb=[0,0,0,0,0,0], ub=[6,4,6,4,3,4], precision=1)
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    
    ga.to(device=device)
    
    ga.run()
    


    feature request 
    opened by koralturkk 7
  • How should integer programming be written?

    demo_func = lambda x: x[0]**2 + x[1]**2 + x[2]**2
    ga = GA(func=demo_func, n_dim=3, max_iter=500, lb=[-1, -10, -5], ub=[2, 10, 2])
    best_x, best_y = ga.fit()

    If x[0] can only take integer values, how should the code be written? Thanks.

    question resolved 
    opened by bjmwang 7
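
    A hedged sketch of one possible approach, based on the precision parameter shown earlier in this README (not taken from the original thread): setting precision=1 on a variable with integer bounds restricts it to integer values.

        from sko.GA import GA

        demo_func = lambda x: x[0] ** 2 + x[1] ** 2 + x[2] ** 2
        # precision=1 makes x[0] integer-valued; the other two variables stay quasi-continuous
        ga = GA(func=demo_func, n_dim=3, max_iter=500, lb=[-1, -10, -5], ub=[2, 10, 2],
                precision=[1, 1e-7, 1e-7])
        best_x, best_y = ga.run()
        print('best_x:', best_x, '\n', 'best_y:', best_y)
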
  • How to nest a second, local search inside the first-level search?

    My outer search objective function is written as below. I need to run another search based on the result of ant.step4() (which is stored in the ant's attributes). How should I write this?

    def obj_func(x):
        jobs = read_data(Const.filename)  # read the job information
        ant = Antibody(jobs, x)           # create an antibody instance
        ant.step1()                       # group according to random keys
        ant.step2()                       # compute departure times from the grouping
        ant.step3()
        ant.step4()
        # ant_a = Antibody_a(ant)
        ant.step5()
        ant.cal_obj()                     # compute this antibody's objective value
        return ant.obj

    opened by cliff0149 6
  • How can I put the objective function inside a class?

    I want to put the objective function inside an object, but adding self to the parameter list does not work. I need to pass some member variables into the objective function for the computation. How should I write this?

    My objective function is written like this:

        def cal_score(self,p):
            print(p)
            week_or_month, frequency_week, start_week, start_month_day, target_profit, sell_share = p
            player = Player("bot", self.money, self.funds_objects)
            fund_ids = self.funds_objects.keys()
            for fund_id in fund_ids:
                player.simulation(fund_id, self.input_start_time, self.input_end_time, self.week_or_month, self.frequency_week, self.target_profit,self.sell_share, self.start_week, self.start_month_day)
            player.caculate_score(self.target_profit)
            score = player.score
            average_yield = player.average_yield
            print("score", score)
            print("average_yield", average_yield)
            return -score
    

    I hope the self parameter can be supported, so that the objective function can be used inside a class more easily.

    question resolved 
    opened by qiujunhan 5
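
    A hedged sketch of one common workaround (plain Python, not taken from the original thread): since GA only needs a one-argument callable, a bound method that closes over self can be passed directly as func. The Strategy class below is hypothetical.

        from sko.GA import GA

        class Strategy:
            def __init__(self, money):
                self.money = money  # member variable used inside the objective

            def cal_score(self, p):
                # toy objective that combines the candidate p with a member variable
                return sum((v - self.money) ** 2 for v in p)

        strategy = Strategy(money=0.5)
        # strategy.cal_score is a bound method, i.e. a callable of one argument
        ga = GA(func=strategy.cal_score, n_dim=2, size_pop=50, max_iter=100, lb=[0, 0], ub=[1, 1])
        best_x, best_y = ga.run()
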
  • About passing extra parameters

    The official example is like this:

    def demo_func(x):
        x1, x2, x3 = x
        return x1**2 + x2*1 + x3

    My objective function is more like this:

    def demo_func(x, model, y):
        x1, x2, x3 = x

    where model and y are extra parameters I need to pass in. How should I write the demo_func used in PSO(func=demo_func, dim=3, pop=40, max_iter=150, lb=[0, -1, 0.5], ub=[1, 1, 1], w=0.8, c1=0.5, c2=0.5)?

    question resolved 
    opened by hadlang 5
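
    A hedged sketch of one standard way to do this (plain Python, not taken from the original thread): bind the extra arguments with functools.partial (or a lambda) so that PSO still receives a one-argument callable. The model and y values below are placeholders.

        from functools import partial
        from sko.PSO import PSO

        def demo_func(x, model, y):
            x1, x2, x3 = x
            return (x1 - y) ** 2 + x2 ** 2 + x3 ** 2  # toy use of the extra arguments

        model, y = None, 0.5  # placeholder extra parameters
        bound_func = partial(demo_func, model=model, y=y)  # bound_func(x) == demo_func(x, model, y)

        pso = PSO(func=bound_func, n_dim=3, pop=40, max_iter=150, lb=[0, -1, 0.5], ub=[1, 1, 1],
                  w=0.8, c1=0.5, c2=0.5)
        pso.run()
        print('best_x is ', pso.gbest_x, 'best_y is', pso.gbest_y)
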
  • The genetic algorithm for TSP needs improvement

    When using scikit-opt's genetic algorithm to solve TSP, the results do not seem ideal. For example, on att48.tsp from the TSPLIB database, the result is about as good as a random order. The link to att48 is: http://elib.zib.de/pub/mp-testdata/tsp/tsplib/tsp/att48.tsp My parameter settings are: ga_tsp = GA_TSP(func=cal_total_distance, n_dim=num_points, size_pop=300, max_iter=3000, prob_mut=0.1) Thanks.

    bug contributor resolved 
    opened by YoshiPark 5
  • ValueError: operands could not be broadcast together with shapes (400,1) (20,30) (20,30)

    After I added another layer of search inside the algorithm, this error appeared, but it is raised by the outer search rather than the inner one. The problem is in selection: for np.where, the condition matrix and the following matrices seem to have different dimensions. But when I print the condition matrix it is a (20,1) matrix, and I do not know why the program treats it as (400,1).

    [[ True] [ True] [ True] [ True] [ True] [False] [False] [ True] [ True] [False] [ True] [False] [ True] [ True] [False] [False] [ True] [ True] [False] [ True]]

    Iteration 0, best value [27.6]

    Traceback (most recent call last):
      File "D:/Desktop/半集成式VRPSDP/代码/main.py", line 121, in <module>
        best_x, best_y = de.run()
      File "D:\841370\anaconda\lib\site-packages\sko\DE.py", line 86, in run
        self.selection()
      File "D:\841370\anaconda\lib\site-packages\sko\DE.py", line 76, in selection
        self.X = np.where((f_X < f_U).reshape(-1, 1), X, U)
      File "<__array_function__ internals>", line 5, in where
    ValueError: operands could not be broadcast together with shapes (400,1) (20,30) (20,30)

    In addition there is another warning, VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray, at return np.array([func_cached(tuple(x)) for x in X]). I have not found the cause.

    opened by cliff0149 4
  • Allow users to set number of process pool workers/threads

    Would be nice to see the users be able to control the number of processors/threads used when defining 'multiprocessing' or 'multithreading'.

    I would think you could pass in a worker argument (default can be None) to set_run_mode(), something like:

    set_run_mode(obj_func, 'multiprocessing', n_workers=3)

    ..and then further down in the func_transformer() function:

        elif mode == 'multiprocessing':
            from multiprocessing import Pool
            
            pool = Pool()
            if n_workers != None:
                pool = Pool(n_workers)
    
            def func_transformed(X):
                return np.array(pool.map(func, X))
    

    I'm not sure if this would be the appropriate or working syntax, but just an idea I had. Sometimes we don't want to just default to using ALL CPU cores when using multiprocessing.

    opened by windowshopr 0
  • No ant colony algorithm implementation for continuous domains

    The ant colony algorithm seems to implement only an interface for the TSP problem; there is no method for continuous-domain optimization. For ant colony optimization on continuous domains, see the reference below.

    reference: Socha K, Dorigo M. Ant colony optimization for continuous domains[J]. European journal of operational research, 2008, 185(3): 1155-1173.

    opened by lewis738 0
  • Significantly Sub-optimal Results from SA_TSP

    Using code from the SA_TSP example at: https://github.com/guofei9987/scikit-opt/blob/master/examples/demo_sa_tsp.py I am receiving significantly sub-optimal results.

    import numpy as np
    from sko.SA import SA_TSP
    from scipy import spatial
    import matplotlib.pyplot as plt
    from matplotlib.ticker import FormatStrFormatter
    
    num_points = 20
    points_coordinate = np.random.rand(num_points, 2)
    
    # using SA_TSP example code
    num_points = points_coordinate.shape[0]
    distance_matrix = spatial.distance.cdist(points_coordinate, points_coordinate, metric='euclidean')
    
    def cal_total_distance(routine):
        '''The objective function. input routine, return total distance.
        cal_total_distance(np.arange(num_points))
        '''
        num_points, = routine.shape
        return sum([distance_matrix[routine[i % num_points], routine[(i + 1) % num_points]] for i in range(num_points)])
    
    sa_tsp = SA_TSP(func=cal_total_distance, x0=range(num_points), T_max=100, T_min=1, L=10 * num_points)
    best_points, best_distance = sa_tsp.run()
    
    # Plot both solutions
    fig, ax = plt.subplots(1, 2)
    
    best_points_ = np.concatenate([best_points, [best_points[0]]])
    best_points_coordinate = points_coordinate[best_points_, :]
    
    ax[0].plot(sa_tsp.best_y_history)
    ax[0].set_xlabel("Iteration")
    ax[0].set_ylabel("Distance")
    ax[0].set_title('Optimization')
    ax[1].plot(best_points_coordinate[:, 0], best_points_coordinate[:, 1],
               marker='o', markerfacecolor='b', color='c', linestyle='-')
    ax[1].xaxis.set_major_formatter(FormatStrFormatter('%.3f'))
    ax[1].yaxis.set_major_formatter(FormatStrFormatter('%.3f'))
    ax[1].set_xlabel("X")
    ax[1].set_ylabel("Y")
    ax[1].set_title('Coordinates')
    
    plt.show()
    


    opened by johnziebro 2
Releases: v0.6.5