Chess reinforcement learning by AlphaGo Zero methods.

Overview

Binder Demo Notebook

About

Chess reinforcement learning by AlphaGo Zero methods.

This project is based on these main resources:

  1. DeepMind's Oct 19th publication: Mastering the Game of Go without Human Knowledge.
  2. The great Reversi implementation of the DeepMind ideas that @mokemokechicken built in his repo: https://github.com/mokemokechicken/reversi-alpha-zero
  3. DeepMind just released a new version of AlphaGo Zero (now named AlphaZero) in which they master chess from scratch: https://arxiv.org/pdf/1712.01815.pdf. In fact, in chess AlphaZero outperformed Stockfish after just 4 hours (300k steps). Wow!

See the wiki for more details.

Note

I'm the creator of this repo. Some collaborators and I (https://github.com/Zeta36/chess-alpha-zero/graphs/contributors) did our best, but we found that self-play is too computationally expensive for a single machine. Supervised learning worked fine, but we never tried self-play on its own.

Anyway, I want to mention that we have moved to a new repo where many people are working on a distributed version of AlphaZero for chess (MCTS in C++): https://github.com/glinscott/leela-chess

The project is almost done, and everybody will be able to participate just by running a pre-compiled Windows (or Linux) application. A really great job and effort has gone into this project, and I'm pretty sure we'll be able to reproduce the DeepMind results before too long with distributed cooperation.

So, I ask everybody who wishes to see a UCI engine running a neural network beat Stockfish to go to that repo and help with their machine power.

Environment

  • Python 3.6.3
  • tensorflow-gpu: 1.3.0
  • Keras: 2.0.8

New results (after a great number of modifications due to @Akababa)

Using supervised learning on about 10k games, I trained a model (7 residual blocks of 256 filters) to a guesstimate of 1200 Elo with 1200 sims/move. One of the strengths of MCTS is that it scales quite well with computing power.
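
As a rough illustration, the simulation count is just a configuration value. A hedged sketch of raising it (the names Config, config_type and play.simulation_num_per_move are my reading of src/chess_zero/config.py and the configs folder; treat them as assumptions and check your checkout):

# Assumed import path and attribute names; verify against src/chess_zero/config.py.
from chess_zero.config import Config

config = Config(config_type="normal")       # as opposed to "mini" or "distributed"
config.play.simulation_num_per_move = 1200  # more simulations per move: stronger, but slower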

Here you can see an example where I (black) played against the model in the repo (white):

img

Here you can see an example of a game where I (white, ~2000 Elo) played against the model in this repo (black):

img

First "good" results

Using the new supervised learning step I created, I've been able to train a model to the point that it seems to be learning the openings of chess. It also seems the model starts to avoid naively losing pieces.

Here you can see an example of a game I played against this model (AI plays black):

partida1

Here we have a game against a model trained by @bame55 (AI plays white):

partida3

The model plays this way after only 5 epoch iterations of the 'opt' worker; the 'eval' worker replaced the best model 4 times out of 5. At this moment the loss of the 'opt' worker is 5.1 (and it still seems to be converging very well).

Modules

Supervised Learning

I've added a new supervised learning step to the pipeline (to use the human-game PGN files we can find on the internet as a play-data generator). This SL step was also used in the first, original version of AlphaGo, and maybe chess is a complex enough game that we have to pre-train the policy model before starting the self-play process (i.e., maybe chess is too complicated to learn by self-play alone).

Using the new SL process is as simple as running the new "sl" worker at the beginning instead of the "self" worker. Once the model converges enough on SL play-data, we just stop the "sl" worker and start the "self" worker, so the model keeps improving from self-play data.

python src/chess_zero/run.py sl

If you want to use this new SL step, you will have to download big PGN files (chess game files) and paste them into the data/play_data folder (FICS is a good source of data). You can also use the SCID program to filter by headers like player Elo, game result and more.

To avoid overfitting, I recommend using data sets of at least 3000 games and running at most 3-4 epochs.
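
For reference, here is a minimal sketch of how PGN files can be turned into training examples with the python-chess package. The SL worker in this repo is the authoritative implementation; iter_training_examples below is a hypothetical helper:

import os
import chess.pgn

def iter_training_examples(pgn_dir="data/play_data"):
    """Yield (FEN, UCI move, result) tuples from every game in every PGN file."""
    for name in os.listdir(pgn_dir):
        if not name.endswith(".pgn"):
            continue
        with open(os.path.join(pgn_dir, name), errors="ignore") as f:
            while True:
                game = chess.pgn.read_game(f)  # returns None at end of file
                if game is None:
                    break
                result = game.headers.get("Result", "*")  # "1-0", "0-1" or "1/2-1/2"
                board = game.board()
                for move in game.mainline_moves():
                    yield board.fen(), move.uci(), result
                    board.push(move)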

Reinforcement Learning

This AlphaGo Zero implementation consists of three workers: self, opt and eval.

  • self is Self-Play: it generates training data by self-play using BestModel.
  • opt is Trainer: it trains the model and produces next-generation models.
  • eval is Evaluator: it evaluates whether the next-generation model is better than BestModel and, if so, replaces BestModel.

Distributed Training

Now it's possible to train the model in a distributed way. The only thing needed is to use the new parameter:

  • --type distributed: use the distributed config (see src/chess_zero/configs/distributed.py)

So, in order to contribute to the distributed team you just need to run the three workers locally like this:

python src/chess_zero/run.py self --type distributed (or python src/chess_zero/run.py sl --type distributed)
python src/chess_zero/run.py opt --type distributed
python src/chess_zero/run.py eval --type distributed

GUI

  • uci launches the Universal Chess Interface, for use in a GUI.

To set up ChessZero with a GUI, point it to C0uci.bat (or rename it to .sh). For example, this is a screenshot of the random model using Arena's self-play feature: capture
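
For context, UCI is a line-based protocol over stdin/stdout, so the uci worker essentially runs a read-eval loop. A minimal, purely illustrative sketch (not the repo's actual implementation):

import sys

def uci_loop():
    for line in sys.stdin:
        cmd = line.strip()
        if cmd == "uci":
            print("id name ChessZero")
            print("uciok")
        elif cmd == "isready":
            print("readyok")
        elif cmd.startswith("position"):
            pass  # parse the FEN/move list and update the internal board here
        elif cmd.startswith("go"):
            print("bestmove e2e4")  # placeholder; a real engine runs MCTS and reports its move
        elif cmd == "quit":
            break
        sys.stdout.flush()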

Data

  • data/model/model_best_*: BestModel.
  • data/model/next_generation/*: next-generation models.
  • data/play_data/play_*.json: generated training data.
  • logs/main.log: log file.

If you want to train the model from the beginning, delete the above directories.

How to use

Setup

install libraries

pip install -r requirements.txt

If you want to use a GPU, follow these instructions to install with pip3.

Make sure Keras is using Tensorflow and you have Python 3.6.3+. Depending on your environment, you may have to run python3/pip3 instead of python/pip.
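
A quick sanity check for both requirements (keras.backend.backend() is part of the public Keras API):

import sys
from keras import backend as K

assert sys.version_info >= (3, 6), "Python 3.6.3+ is required"
print(K.backend())  # should print: tensorflow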

Basic Usage

To train the model, execute Self-Play, Trainer and Evaluator.

Note: Make sure you are running the scripts from the top-level directory of this repo, i.e. python src/chess_zero/run.py opt, not python run.py opt.

Self-Play

python src/chess_zero/run.py self

When executed, Self-Play will start using BestModel. If BestModel does not exist, a new random model will be created and become BestModel.

options

  • --new: create new BestModel
  • --type mini: use mini config for testing, (see src/chess_zero/configs/mini.py)

Trainer

python src/chess_zero/run.py opt

When executed, training will start. The base model will be loaded from the latest saved next-generation model. If none exists, BestModel is used. The trained model will be saved every epoch.

options

  • --type mini: use mini config for testing, (see src/chess_zero/configs/mini.py)
  • --total-step: specify the total number of training steps (mini-batches). The total step count affects the learning rate, as sketched below.
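
The exact schedule lives in the trainer and its config; as a hedged sketch of the idea, the learning rate steps down as the total step count grows (the breakpoints below are illustrative assumptions, not the repo's real values):

def learning_rate(total_steps):
    # Illustrative AlphaZero-style step schedule; not the repo's actual numbers.
    if total_steps < 100000:
        return 0.02
    elif total_steps < 300000:
        return 0.002
    return 0.0002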

Evaluator

python src/chess_zero/run.py eval

When executed, evaluation will start. It evaluates BestModel and the latest next-generation model by playing about 200 games. If the next-generation model wins, it becomes BestModel (see the sketch below).
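
The gating logic amounts to this simplified sketch (illustrative only; the real game count and win-rate threshold come from the eval worker's config, and play_game is a hypothetical helper):

def should_replace_best(play_game, n_games=200, threshold=0.55):
    # play_game() is assumed to return 1 if the next-generation model wins, else 0.
    wins = sum(play_game() for _ in range(n_games))
    return wins / n_games >= threshold  # True: promote the candidate to BestModel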

options

  • --type mini: use mini config for testing, (see src/chess_zero/configs/mini.py)

Tips and Memory

GPU Memory

Usually a lack of memory causes warnings, not errors. If an error occurs, try changing vram_frac in src/chess_zero/configs/mini.py:

self.vram_frac = 1.0

A smaller batch_size will reduce the memory usage of opt. Try changing TrainerConfig#batch_size in MiniConfig.
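
For reference, with the TensorFlow 1.x backend pinned above, a memory fraction translates into a per-process GPU cap along these lines (a sketch of the mechanism vram_frac controls; the repo wires it through its own session setup):

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

vram_frac = 0.5  # illustrative value, mirroring self.vram_frac in the config
tf_config = tf.ConfigProto()
tf_config.gpu_options.per_process_gpu_memory_fraction = vram_frac
set_session(tf.Session(config=tf_config))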

Comments
  • Slfast2

    Faster SL worker for parsing large PGN files. It simply iterates through and adds every game in every PGN in the play_data folder once, instead of choosing one at random as in online learning.

    opened by Akababa 14
  • Running Alpha Zero

    Whenever I run AlphaZero chess for the second time after reinstalling python I get the error

    File "h5py\h5f.pyx", line 78, in h5py.h5f.open OSError: Unable to open file (File signature not found)

    How do I avoid getting that error? When I run the command src\chess_zero\run.py self --distributed and want to stop execution, I type ctrl-c. How else do I stop the command without getting the above error? Thanks.

    Philip

    opened by philipstephens 10
  • Question... Is this possible to replace Tensorflow with Microsoft CNTK?

    Great stuff! I got it to work in both Windows and Ubuntu... Now I have a question to ask: since it's written in Keras, which should support both Google Tensorflow and MS CNTK, is there any simple way to replace Tensorflow with CNTK?

    opened by xyhan88 7
  • uploaded model's config and weight files

    I am trying to train a network close to the uploaded one, to get some experience.

    I have used the config of the uploaded one, downloaded games from ficsgames.org and made some attempts, but I can't train a good enough network.

    Can you share your training data? Without it, I can't tell whether I'm doing something wrong or just using the wrong games :(

    opened by ileile 6
  • New best weights after supervised learning from large pgn files using optimize with generator

    Trained for about four days using supervised learning, feeding large PGN files (CCRL) directly to the new version of optimize.py with fit_generator.

    Did several passes, reducing the learning rate from .01 to .001 and finally .0001.
    Used a batch size of 2,048 and increased the number of steps to 100,000 for LR .0001 (that took 3 days).

    Win rate vs. prior best weights: 96.6% after 29 games.

    opened by brianprichardson 6
  • logger isn't substituting values

    For example, this is what I get:

    2017-12-13 09:38:11,484@chess_zero.agent.model_chess DEBUG # model files does not exist at {config_path} and {weight_path}
    2017-12-13 09:38:12,425@chess_zero.agent.model_chess DEBUG # save model to {config_path}
    2017-12-13 09:38:13,099@chess_zero.agent.model_chess DEBUG # saved model digest {self.digest}

    IDK if I'm missing something here, but I don't think logger has access to the calling scope.

    opened by Akababa 5
  • Move history (8 half moves) input request

    @Akababa, @benediamond and any others with comments or suggestions: I noticed some older efforts in your forks with the 8 (half) move history AZ input idea. After considerable, but far from exhaustive, testing with only the current board as input, I am starting to think that some move history is critical. LCZero uses it with a far smaller (64x6) NN than I have been training (256x7), and my net still drops material carelessly. This is after several hundred thousand supervised games.

    Anyway, I may be missing other larger-picture issues (random self-play works better than supervised?), but I would like to try supervised training with some move history, perhaps starting with 4 half moves.

    Accordingly, could you please point me to your "best" versions? One was with 110 and another with 105 (96+5), IIRC. I plan to just graft the input back onto the relatively stable Zeta master that I have been fiddling with to use PGN input.

    Thanks, Brian

    opened by brianprichardson 4
  • SyntaxError

    Traceback (most recent call last):
      File "src/chess_zero/run.py", line 17, in <module>
        from chess_zero import manager
      File "src/chess_zero/manager.py", line 22
        def setup(config: Config, args):
                        ^
    SyntaxError: invalid syntax

    opened by Linus789 4
  • Not bugs, just questions about designing.

    Hi, Zeta36!

    I am trying to construct an alpha-zero-style AI for Twelve Shogi, a simple kind of Shogi. I am wondering how you designed the board state for the CNN input.

    Thanks.

    opened by PlanetMoon 3
  • No module named chess

    @Zeta36

    When I run python src/chess_zero/run.py self under root, it says:

    Traceback (most recent call last):
      File "src/chess_zero/run.py", line 16, in <module>
        from chess_zero import manager
      File "src/chess_zero/manager.py", line 6, in <module>
        from .config import Config
      File "src/chess_zero/config.py", line 2, in <module>
        import chess
    ModuleNotFoundError: No module named 'chess'
    

    Is there any python module named chess?

    opened by yhyu13 3
  • Module 'keras.backend' has no attribute 'observe_object_name'

    Error while running run.py --cmd self

    Using TensorFlow backend.
    2021-04-14 13:18:34,299@chess_zero.agent.model_chess DEBUG # loading model from C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\data\model\model_best_config.json
    Traceback (most recent call last):
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\src\chess_zero\run.py", line 20, in <module>
        manager.start()
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\src\chess_zero\manager.py", line 64, in start
        return self_play.start(config)
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\src\chess_zero\worker\self_play.py", line 25, in start
        return SelfPlayWorker(config).start()
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\src\chess_zero\worker\self_play.py", line 45, in __init__
        self.current_model = self.load_model()
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\src\chess_zero\worker\self_play.py", line 85, in load_model
        if self.config.opts.new or not load_best_model_weight(model):
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\src\chess_zero\lib\model_helper.py", line 15, in load_best_model_weight
        return model.load(model.config.resource.model_best_config_path, model.config.resource.model_best_weight_path)
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\src\chess_zero\agent\model_chess.py", line 145, in load
        self.model = Model.from_config(json.load(f))
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\venv\lib\site-packages\keras\engine\training.py", line 2332, in from_config
        functional.reconstruct_from_config(config, custom_objects))
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\venv\lib\site-packages\keras\engine\functional.py", line 1274, in reconstruct_from_config
        process_layer(layer_data)
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\venv\lib\site-packages\keras\engine\functional.py", line 1256, in process_layer
        layer = deserialize_layer(layer_data, custom_objects=custom_objects)
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\venv\lib\site-packages\keras\layers\serialization.py", line 159, in deserialize
        return generic_utils.deserialize_keras_object(
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\venv\lib\site-packages\keras\utils\generic_utils.py", line 675, in deserialize_keras_object
        deserialized_obj = cls.from_config(cls_config)
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\venv\lib\site-packages\keras\engine\base_layer.py", line 716, in from_config
        return cls(**config)
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\venv\lib\site-packages\keras\engine\input_layer.py", line 152, in __init__
        super(InputLayer, self).__init__(dtype=dtype, name=name)
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\venv\lib\site-packages\tensorflow\python\training\tracking\base.py", line 522, in _method_wrapper
        result = method(self, *args, **kwargs)
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\venv\lib\site-packages\keras\engine\base_layer.py", line 350, in __init__
        self._init_set_name(name)
      File "C:\UTD\Sem 2\Machine Learning\chess-alpha-zero-master\venv\lib\site-packages\keras\engine\base_layer.py", line 2372, in _init_set_name
        backend.observe_object_name(name)
    AttributeError: module 'keras.backend' has no attribute 'observe_object_name'
    
    opened by nikhileshp 2
  • Bump tensorflow-gpu from 1.15.2 to 2.9.3

    Bumps tensorflow-gpu from 1.15.2 to 2.9.3.

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 0
  • Compatible with Ampere GPUs?

    I noticed that the version of Tensorflow required is quite old. I'm not sure about this, but I think Ampere GPUs usually require at least CUDA 11, which Tensorflow 2.4.0 requires. Does anyone know if I can still run this repo on Ampere?

    opened by daquang 0
  • pyperclip EOF error

    Traceback (most recent call last):
      File "src/chess_zero/run.py", line 20, in <module>
        manager.start()
      File "src/chess_zero/manager.py", line 64, in start
        return self_play.start(config)
      File "src/chess_zero/worker/self_play.py", line 25, in start
        return SelfPlayWorker(config).start()
      File "src/chess_zero/worker/self_play.py", line 69, in start
        pretty_print(env, ("current_model", "current_model"))
      File "src/chess_zero/lib/data_helper.py", line 26, in pretty_print
        pyperclip.copy(env.board.fen())
      File "/usr/local/lib/python3.6/dist-packages/pyperclip/__init__.py", line 659, in lazy_load_stub_copy
        return copy(text)
      File "/usr/local/lib/python3.6/dist-packages/pyperclip/__init__.py", line 336, in __call__
        raise PyperclipException(EXCEPT_MSG)
    pyperclip.PyperclipException: Pyperclip could not find a copy/paste mechanism for your system. For more information, please visit https://pyperclip.readthedocs.io/en/latest/index.html#not-implemented-error
    Exception in thread prediction_worker:
    Traceback (most recent call last):
      File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
        self.run()
      File "/usr/lib/python3.6/threading.py", line 864, in run
        self._target(*self._args, **self._kwargs)
      File "src/chess_zero/agent/api_chess.py", line 63, in _predict_batch_worker
        data.append(pipe.recv())
      File "/usr/lib/python3.6/multiprocessing/connection.py", line 250, in recv
        buf = self._recv_bytes()
      File "/usr/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
        buf = self._recv(4)
      File "/usr/lib/python3.6/multiprocessing/connection.py", line 383, in _recv
        raise EOFError
    EOFError

    opened by SwiftGodDev 1
  • Takes 30s to 40s per move!! and Why does it play only one opening as white?

    The engine takes around 30s to 40s per move. It also plays just one opening (the English Opening) as white. Is there any way to decrease the per-move time? Any and all help would be appreciated.

    opened by SecretProgrammer10 3
  • Change the command for Supervised Learning in README.md

    Hello all,

    Just git cloned the project and tried to play with it; the command for Supervised Learning is not python src/chess_zero/run.py sl

    but

    python src/chess_zero/run.py --cmd sl

    otherwise the command "sl" is not recognized.

    opened by mphuget 0