This repository is the official implementation of the Hybrid Self-Attention NEAT algorithm.

Overview

Hybrid-Self-Attention-NEAT

Abstract

This repository contains the code to reproduce the results presented in the original paper.
In this article, we present a “Hybrid Self-Attention NEAT” method to improve the original NeuroEvolution of Augmenting Topologies (NEAT) algorithm on high-dimensional inputs. Although the NEAT algorithm has shown significant results on various challenging tasks, it cannot create a well-tuned network when the input representation is high-dimensional. Our study addresses this limitation by using self-attention as an indirect encoding method to select the most important parts of the input. In addition, we improve the overall performance with the help of a hybrid method to evolve the final network weights. The main conclusion is that Hybrid Self-Attention NEAT can eliminate this restriction of the original NEAT. The results indicate that, in comparison with other evolutionary algorithms, our model achieves comparable scores on Atari games with raw pixel input while using a much lower number of parameters.
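As a rough, illustrative sketch (not the repository's exact code), self-attention over flattened image patches scores each patch by how much attention it receives and keeps only the top-scoring ones; the selected patches then form the low-dimensional input to the evolved NEAT network. All names below (select_important_patches, W_k, W_q, top_k) are purely illustrative.

import numpy as np

def select_important_patches(patches, W_k, W_q, top_k=10):
    # patches: (n_patches, patch_dim) array of flattened image patches
    # W_k, W_q: evolved projection matrices of shape (patch_dim, d)
    keys = patches @ W_k
    queries = patches @ W_q
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    # Row-wise softmax gives the attention matrix
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    # A patch's importance is the total attention it receives from all patches
    importance = attn.sum(axis=0)
    # Return the indices of the top_k most important patches
    return np.argsort(importance)[::-1][:top_k]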

NOTE: The original implementations of self-attention for Atari games and of the NEAT algorithm can be found here:
Neuroevolution of Self-Interpretable Agents: https://github.com/google/brain-tokyo-workshop/tree/master/AttentionAgent
Pure Python library for NEAT and other variations: https://github.com/ukuleleplayer/pureples

Execution

To use this work in your research or projects, you need:

  • Python 3.7
  • Python packages listed in requirements.txt

NOTE: The following commands are based on Ubuntu 20.04

To install Python:

First, check whether you already have it installed:

python3 --version

If you don't have Python 3.7 on your computer, you can install it with the commands below:

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.7
sudo apt install python3.7-distutils

To install the required packages via pip:

python3.7 -m pip install -r requirements.txt

To run this project on an Ubuntu server:

You need to uncomment the following lines in experiments/configs/configs.py:

_display = pyvirtualdisplay.Display(visible=False, size=(1400, 900))
_display.start()
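
For reference, a minimal standalone sketch of the same headless-display setup (assuming pyvirtualdisplay is installed from requirements.txt; the import may already be present in configs.py):

import pyvirtualdisplay

# Start an invisible virtual X display so environments that render frames
# (e.g. Atari) can run on a server without a physical screen.
_display = pyvirtualdisplay.Display(visible=False, size=(1400, 900))
_display.start()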

You also need to install some system dependencies:

sudo apt-get install -y xvfb x11-utils

To train the model:

  • First, check the configuration you need. The default ones are listed in experiments/configs/.
  • We highly recommend increasing the population size and the number of iterations to get better results.
  • Make sure the working directory is ~/Hybrid_Self_Attention_NEAT/
  • Run runner.py as below:
python3.7 -m experiment.runner

NOTE: If you have limited resources (like RAM), you should decrease the number of iterations and instead run the script in a loop:

for i in {1..N}; do python3.7 -m experiment.runner; done

(replace N with the number of runs you want)

To tune the model:

  • First, make sure the model has been trained and successfully saved in experiments/ as main_model.pkl
  • Run tunner.py as below:
python3.7 -m experiment.tunner

NOTE: If you have limited resources (like RAM), you should decrease the number of iterations and instead run the script in a loop:

for i in {1..N}; do python3.7 -m experiment.tunner; done

(replace N with the number of runs you want)

Citation

For attribution in academic contexts, please cite this work as:

@misc{khamesian2021hybrid,
    title           = {Hybrid Self-Attention NEAT: A novel evolutionary approach to improve the NEAT algorithm}, 
    author          = {Saman Khamesian and Hamed Malek},
    year            = {2021},
    eprint          = {2112.03670},
    archivePrefix   = {arXiv},
    primaryClass    = {cs.NE}
}