JORLDY is an open-source Reinforcement Learning (RL) framework provided by KakaoEnterprise

Overview

JORLDY (Beta)


Hello WoRLd!! Join Our Reinforcement Learning framework for Developing Yours (JORLDY) is an open-source Reinforcement Learning (RL) framework provided by KakaoEnterprise. It is named after Jordy, one of the Kakao Niniz characters. It provides various RL algorithms and environments, and they can be easily used with a single command. This repository is open to help RL researchers and students who study RL.

🔥 Features

  • 20+ RL Algorithms and various RL environments are provided
  • Algorithms and environments are customizable
  • New algorithms and environments can be added
  • Distributed RL algorithms are provided using Ray
  • Benchmarks of the algorithms are conducted in many RL environments

Notification

Currently, JORLDY is a pre-release version. It only supports Linux, but you can use JORLDY with Docker on Windows and Mac. On a local (non-Docker) environment on Windows and Mac, however, you can only use (single, sync_distributed)_train_nomp.py and eval.py. In WSL, there is an issue with algorithms that use a target network in the scripts that use the multiprocessing library. We will address these issues as soon as possible.

* (single, sync_distributed)_train_nomp.py: these scripts do not use the multiprocessing library. In detail, the manage process is included in the main process, so they can be a bit slow.

⬇️ Installation

 $ git clone https://github.com/kakaoenterprise/JORLDY.git  
 $ cd JORLDY
 $ pip install -r requirements.txt

 # linux
 $ apt-get update 
 $ apt-get -y install libgl1-mesa-glx # for opencv
 $ apt-get -y install libglib2.0-0    # for opencv
 $ apt-get -y install gifsicle        # for gif optimize

🐳 To use docker

(customize if necessary)

 $ cd JORLDY

 # mac, linux
 $ docker build -t jorldy -f ./docker/Dockerfile .
 $ docker run -it --rm --name jorldy -v `pwd`:/JORLDY jorldy /bin/bash

 # windows
 > docker build -t jorldy -f .\docker\Dockerfile .
 > docker run -it --rm --name jorldy -v %cd%:/JORLDY jorldy /bin/bash

To use additional environments

(atari and super-mario-bros need to be installed manually due to licensing issues)

 # To use atari
 $ pip install --upgrade gym[atari,accept-rom-license]
 
 # To use super-mario-bros
 $ pip install gym-super-mario-bros

🚀 Getting started

$ cd jorldy

# Examples: python [script name] --config [config path]
$ python single_train.py --config config.dqn.cartpole
$ python single_train.py --config config.rainbow.atari --env.name assault

# Examples: python [script name] --config [config path] --[optional parameter key] [parameter value]
$ python single_train.py --config config.dqn.cartpole --agent.batch_size 64
$ python sync_distributed_train.py --config config.ppo.cartpole --train.num_worker 8 
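
For reference, a config path such as config.dqn.cartpole appears to refer to a Python config module (e.g. config/dqn/cartpole.py under the jorldy directory). Based on the override keys used above (--env.name, --agent.batch_size, --train.num_worker) and the env block quoted in the issues below, a config might look roughly like the following sketch; key names other than those overrides and the default values are illustrative assumptions, not the actual file contents.

 # rough sketch of a config module such as config/dqn/cartpole.py (illustrative only)
 env = {
     "name": "cartpole",   # overridable via --env.name
     "render": False,
 }

 agent = {
     "name": "dqn",        # assumed key
     "batch_size": 32,     # overridable via --agent.batch_size
 }

 train = {
     "num_worker": 8,      # overridable via --train.num_worker
 }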

🗂️ Release

Version | Release Date      | Source | Release Note
0.0.1   | November 03, 2021 | Source | Release Note

🔍 How to

📄 Documentation

👥 Contributors

📫 Contact: [email protected]


©️ License

Apache License 2.0

🚫 Disclaimer

Installing JORLDY and/or utilizing algorithms or environments not provided by KEP may involve the use of a third party’s intellectual property. It is advisable that users obtain licenses or permissions from the right holder(s), if necessary, or take any other necessary measures to avoid infringement or misappropriation of third-party intellectual property rights.

Comments
  • Ray memory issue when running rnd ppo

    Ray memory issue when running rnd ppo

    Describe the bug A Ray memory issue occurred when running RND PPO on Montezuma's Revenge (Atari env).

    To Reproduce Run RND PPO on Montezuma's Revenge.

    Expected behavior Memory issue occurs.

    Screenshots Screenshot 2021-11-29, 3:13:47 PM

    Development Env. (OS, version, libraries): Linux Ubuntu, Python 3.8, requirement (jorldy0.0.2)


    bug 
    opened by leonard-q 3
  • Modify train files, eval_manager

    Modify train files, eval_manager

    :star2: Hello! Thanks for contributing to JORLDY!

    Checklist

    Please check if you consider the following items.

    • [v] My code follows the style guidelines of this project
    • [v] My code follows the naming convention of documentation
    • [v] I have commented my code, particularly in hard-to-understand areas
    • [v] My changes generate no new warnings or errors

    Types of changes

    Bugfix

    Test Configuration

    • OS: Windows 10
    • Python version: 3.8
    • Additional libraries: None

    Description

    • Fixed #44

    The basic idea is that the eval_manager in the child process should create its own env. For now, the distributed_train.py process doesn’t use the env after creating the agent config.

    opened by zenoengine 3
  • V-MPO atari performance issue

    V-MPO atari performance issue

    I tried running V-MPO on Atari Breakout, and it didn't seem to gain any momentum; any reason why this might be? I tried changing some of the parameters in the config file and I still didn't get any improvement. Is this how it is supposed to be at the beginning of training?

    image

    bug 
    opened by hlsafin 2
  • Leonard/multi modal

    Leonard/multi modal

    :star2: Hello! Thanks for contributing to JORLDY!

    Checklist

    Please check if you consider the following items.

    • [v] My code follows the style guidelines of this project
    • [v] My code follows the naming convention of documentation
    • [v] I have commented my code, particularly in hard-to-understand areas
    • [v] My changes generate no new warnings or errors

    Types of changes

    New feature

    Test Configuration

    • OS: Linux Ubuntu
    • Python version: 3.8
    • Additional libraries: None

    Description

    Environments that provide multi-modal (image, vector) input can now be used with all agents.

    opened by leonard-q 2
  • Ray Out Of Memory Error

    Ray Out Of Memory Error

    Describe the bug Ray runs out of memory (RayOutOfMemoryError) when running distributed training on Atari.

    To Reproduce
    python main.py --async --config config.r2d2.atari --env.name breakout
    python main.py --async --config config.muzero.atari --env.name qbert

    Expected behavior RayOutOfMemoryError

    Screenshots Screenshot 2022-05-30, 6:46:40 PM; Screenshot 2022-05-30, 5:07:28 PM

    Development Env. (OS, version, libraries): Linux python 3.7.11 jorldy:0.3.0

    Additional context
    https://stackoverflow.com/questions/60175137/out-of-memory-with-ray-python-framework
    https://github.com/ray-project/ray/issues/5572

    It seems that GC for ray shared memory doesn't work properly.

    bug 
    opened by kan-s0 1
  • Non-episodic update of Multistep agent

    Non-episodic update of Multistep agent

    Describe the bug Samples of the Multistep agent contain garbage values for post-terminal states.


    bug 
    opened by erinn-lee 1
  • update put&timeout to put_nowait

    update put&timeout to put_nowait

    :star2: Hello! Thanks for contributing to JORLDY!

    Checklist

    Please check if you consider the following items.

    • [x] My code follows the style guidelines of this project
    • [x] My code follows the naming convention of documentation
    • [x] I have commented my code, particularly in hard-to-understand areas
    • [x] My changes generate no new warnings or errors

    Types of changes

    Please describe the types of changes! (ex. Bugfix, New feature, Documentation, ...)

    Test Configuration

    • OS:
    • Python version:
    • Additional libraries:

    Description

    Optimize the put method.

    opened by ramanuzan 1
  • memory size in test_r2d2_agent.py

    memory size in test_r2d2_agent.py

    Describe the bug agent.memory.size is not defined correctly.

    To Reproduce Run pytest after uncommenting agent.memory.size.

    Screenshots image

    Development Env. (OS, version, libraries): Linux Ubuntu

    bug 
    opened by leonard-q 1
  • Couldn't launch the "Server/DroneDelivery"

    Couldn't launch the "Server/DroneDelivery"

    Describe the bug

    mlagents_envs.exception.UnityEnvironmentException:
    
    Couldn't launch the ./core/env/mlagents/DroneDelivery/Server/DroneDelivery environment. 
    Provided filename does not match any environments.
    

    To Reproduce

    # docker
    docker build -t jorldy -f ./docker/Dockerfile .
    docker run -it --rm --name jorldy -v `pwd`:/JORLDY jorldy /bin/bash
    
    python sync_distributed_train.py --config=config.ppo.drone_delivery_mlagent
    


    Screenshots

    Development Env. (OS, version, libraries): Ubuntu 18.04.5 LTS, mlagents-envs 0.26.0


    bug 
    opened by zenoengine 1
  • Errors when running Drone_Challenge

    Errors when running Drone_Challenge

    Describe the bug

    1. mlagents did not run until hiredis was installed
    2. DroneDelivery env error, I think it's corrupted.

    To Reproduce
    pip install -r requirements.txt
    python sync_distributed_train.py --config=config.ppo.drone_delivery_mlagent

    Expected behavior

    First, after installing requirements.txt I ran "python sync_distributed_train.py --config=config.ppo.drone_delivery_mlagent" and saw the warning "redis-py works best with hiredis please consider installing". In my case it did not cause any problem running mlagents, but a friend of mine could not run it until he installed hiredis.

    Second, when I ran mlagents I could barely see the drone and the destination points (please see the attached picture). I could solve the problem by overwriting the files with this.

    Please check these errors. Thanks

    Screenshots image

    Development Env. (OS, version, libraries): Windows 10, Anaconda, Python3.8.8

    bug 
    opened by pnltoen 1
  • pre-check discrete or continuous action by algorithms

    pre-check discrete or continuous action by algorithms

    Is your feature request related to a problem? Please describe. Hi, thank you for sharing this project. For now, it seems DQN doesn't check in advance whether the action space is discrete or continuous. When I change the dqn.cartpole config

    env = {
        "name":"cartpole",
        "render":False,
    }
    

    to

    env = {
        "name":"cartpole",
        "render":False,
        "mode":"continuous",
    }
    

    it doesn't give any errors but doesn't train well. Since DQN is an algorithm for discrete actions and the buffer stores integer actions, the continuous CartPole env only ever runs action = 1. (I didn't really look into whether other algorithms check the action type, but DQN doesn't.)

    Describe the solution you'd like It might be possible to insert an assert statement in each algorithm's code.
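
    A minimal sketch of such a check (an editor's illustration in the spirit of this suggestion; the action_type attribute name is an assumption, not necessarily JORLDY's actual env API):

    def check_discrete_action(env):
        # DQN assumes a discrete action space; fail fast instead of silently mis-training.
        # 'action_type' is a hypothetical attribute name used only for this sketch.
        action_type = getattr(env, "action_type", "discrete")
        assert action_type == "discrete", (
            "DQN supports only discrete actions; "
            "use a continuous-action algorithm for mode='continuous' envs."
        )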

    Describe alternatives you've considered x

    Additional context x

    enhancement 
    opened by HanbumKo 1
  • Unavailable modules ['mlagent', 'mujoco', 'nes', 'procgen']

    Unavailable modules ['mlagent', 'mujoco', 'nes', 'procgen']

    Describe the bug Unavailable modules ['mlagent', 'mujoco', 'nes', 'procgen']

    module: mlagent
    error: Traceback (most recent call last):
      File "e:\study\machineStudy\project\Jorldy\JORLDY\jorldy\core\env\__init__.py", line 21, in <module>
        module = __import__(module_path, fromlist=[None])
      File "e:\study\machineStudy\project\Jorldy\JORLDY\jorldy\core\env\mlagent.py", line 1, in <module>
        from mlagents_envs.environment import UnityEnvironment, ActionTuple
    ModuleNotFoundError: No module named 'mlagents_envs'

    and ModuleNotFoundError: No module named 'mujoco_py' ModuleNotFoundError: No module named 'nes_py'

    and ImportError: cannot import name 'ProcgenEnv' from partially initialized module 'procgen' (most likely due to a circular import) (e:\study\machineStudy\project\Jorldy\JORLDY\jorldy\core\env\procgen.py)

    To Reproduce Steps to reproduce the behavior: in main.py set default_config_path = "config.ppo.pong_mlagent" and run.

    When I pip install mlagents-envs, I get: Couldn't launch the ./core/env/mlagents/Pong/Windows/Pong environment. Provided filename does not match any environments. (Raised at File "E:\study\machineStudy\project\Jorldy\JORLDY\jorldy\core\env\mlagent.py", line 37, in __init__, from self.env = UnityEnvironment(...).)

    I changed the mlagent code to:

        # resolve the build path relative to this file instead of the current working directory
        rootPath = os.path.abspath(os.path.dirname(__file__)) + "/../../"
        env_path = rootPath + f"./core/env/mlagents/{env_name}/{match_build()}/{env_name}"
    

    and it runs.

    However, when using async_distributed_train with mlagent, the program does not end after the run finishes.

    The last log: Interact process done.

    Expected behavior No error; training runs successfully and ends successfully.

    Development Env. (OS, version, libraries): windows 10

    bug 
    opened by xiezhipeng-git 0
  • R2D2 optimize and benchmark

    R2D2 optimize and benchmark

    Is your feature request related to a problem? Please describe. Currently, the state stored in an R2D2 transition is float64, which is too large. And if the sequence length is increased accordingly, the existing buffer size becomes too large.

    Describe the solution you'd like

    • Change the state type of the transition to uint8 (see the sketch after this list).
    • Reduce the buffer size of the config.
    • R2D2 atari benchmark
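
    A minimal sketch of the proposed storage change (an editor's illustration, assuming image states in [0, 255]; the helper names are hypothetical, not JORLDY's actual code):

    import numpy as np

    def compress_state(state):
        # store observations as uint8 (1 byte per value) instead of float64 (8 bytes)
        return np.asarray(state, dtype=np.uint8)

    def decompress_state(state_uint8):
        # cast back to float32 and rescale only when a batch is sampled for training
        return state_uint8.astype(np.float32) / 255.0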

    Describe alternatives you've considered

    • Fixed size when adding state to _transition in agent interact callback.

    Additional context

    • R2D2 atari benchmark
    enhancement 
    opened by kan-s0 0
  • MuZero performance issue

    MuZero performance issue

    Describe the bug MuZero shows very good performance in some environments such as cartpole, pong mlagent, and atari (pong, breakout). However, it shows bad performance in most of the Atari environments (spaceinvaders, qbert, enduro, seaquest, ...).

    To Reproduce Try running MuZero algorithm in environments other than pong and breakout

    Expected behavior It shows worse performance when compared to other algorithms.

    Screenshots

    Development Env. (OS, version, libraries): Linux, Python 3.8, jorldy 0.3.0 requirement


    bug 
    opened by leonard-q 0
  • Multi-GPU

    Multi-GPU

    Please describe the feature you want to add. Use Multi-GPU.


    enhancement 
    opened by erinn-lee 0
  • Invalid probability value in tensor when running mpo

    Invalid probability value in tensor when running mpo

    Describe the bug RuntimeError when running mpo

    To Reproduce

    python main.py --config config.mpo.atari --env.name breakout --sync
    

    When config is modified with the values shown in the paper, it occurs faster and more frequently.

    Expected behavior

    • An error occurs when calling the multinomial method with pi from the actor network (a possible mitigation is sketched below).
    • RuntimeError: probability tensor contains either inf, nan or element < 0
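
    A possible mitigation (an editor's sketch, not JORLDY's actual fix), assuming pi holds the actor's output probabilities: sanitize the tensor before sampling.

    import torch

    def safe_multinomial(pi, num_samples=1, eps=1e-8):
        # replace inf/nan and negative entries, then renormalize before sampling
        pi = torch.nan_to_num(pi, nan=0.0, posinf=0.0, neginf=0.0).clamp(min=0.0)
        pi = pi + eps                          # avoid an all-zero row
        pi = pi / pi.sum(dim=-1, keepdim=True)
        return torch.multinomial(pi, num_samples)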

    Screenshots

    training graph

    Screenshot 2022-04-18, 2:36:23 PM

    • default config, green, also causes an error at 7M.

    error txt

    Screenshot 2022-04-18, 2:23:06 PM

    mpo generated agent code

    Screenshot 2022-04-18, 2:28:12 PM

    Development Env. (OS, version, libraries):

    • linux
    • V4XLARGE
    • python 3.7.11
    • jorldy:0.3.0

    Additional context

    • Even with default config, an error sometimes occurs after a lot of learning.
    • If you set the config to the value shown in the paper, you get a much higher score at the beginning, but an error quickly occurs.
    bug 
    opened by kan-s0 0
Releases(v0.5.0)
  • v0.5.0(Apr 18, 2022)

    ❗Important

    • JORLDY ArXiv Paper is published! (link)
    • Algorithm description is added! (#168) (link)

    🛠️ Fixes & Improvements

    • PPO continuous debugging is done (#157)
    • Initialize actors network as a learner network (#165)

    🔩 Minor fix

    • Modify to reset rollout buffer stamp to 0 (#165)

    ⏰ Known Issues

    • R2D2 needs to be optimized
    • IQN based algorithms debugging should be done
    • VMPO performance is unstable (#164)

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.5.0: @leonard-q, @ramanuzan, @kan-s0, @erinn-lee
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Apr 4, 2022)

    🛠️ Fixes & Improvements

    • Update Pytorch version to 1.10 and other packages (#139)
    • ICM and RND debugging is done (#145)
    • APE-X debugging is done (#147)
    • SAC discrete implemented (#150)

    🔩 Minor fix

    • Update Readme (contributors) (#138)
    • Update distributed architecture flowchart and timeline (#143)
    • Learning rate decay can be set as optional (#151)
    • Split optimizer of ICM and RND from PPO (#152)
    • Modify async step calculation (#154)

    ⏰ Known Issues

    • R2D2 needs to be optimized
    • IQN based algorithms have to be evaluated

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.4.0: @leonard-q, @ramanuzan, @kan-s0, @erinn-lee
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Mar 10, 2022)

    ❗Important

    • Integrate scripts into one main script (#125)
    • TD3 is implemented (#127)
    • R2D2 is implemented, but it needs to be optimized (#104)

    🛠️ Fixes & Improvements

    • Edit stamp step calc; reset to 0 → -= period step (#130)
    • Implement a gather thread to process gets from the queue with a thread (update the manage process with it) (#130)
    • Integrate DQN network, deterministic policy actor, critic (#129)
    • Add lr scheduler to all RL algorithms (#108)

    🔩 Minor fix

    • Delete unused variable in ddqn (#128)

    ⏰ Known Issues

    • ICM PPO and RND PPO performance degrades after PPO was modified; this needs to be fixed
    • R2D2 needs to be optimized
    • APE-X debugging has to be done
    • IQN based algorithms have to be evaluated

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.3.0: @leonard-q, @ramanuzan, @kan-s0, @erinn-lee
    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Jan 27, 2022)

    ❗Important

    • Atari wrapper is modified with reference to the openai baselines wrappers (#92) (see the sketch after this list)
      • EpisodicLifeEnv, MaxAndSkipEnv, ClipRewardEnv(sign) are applied
      • reference: https://github.com/openai/baselines/blob/master/baselines/common/atari_wrappers.py
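
    A sketch of how such wrappers are typically applied, using the baselines-style wrapper classes referenced above (this mirrors the openai baselines API, not JORLDY's internal implementation):

    import gym
    from baselines.common.atari_wrappers import EpisodicLifeEnv, MaxAndSkipEnv, ClipRewardEnv

    env = gym.make("BreakoutNoFrameskip-v4")
    env = MaxAndSkipEnv(env, skip=4)   # repeat each action for 4 frames, max-pool the last two
    env = EpisodicLifeEnv(env)         # treat losing a life as the end of an episode
    env = ClipRewardEnv(env)           # clip rewards to their sign: {-1, 0, +1}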

    🛠️ Fixes & Improvements

    • Error in Drone Delivery Env Mac build is fixed (#94)
    • Mujoco is supported in docker (#96)
    • PPO algorithm debugging is done (#103)
      • Implement value-clip (see the sketch after this list)
        • reference: https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/ppo2/model.py#L133
      • Update log calc to prevent gradient divergence; prob_tensor.log() → Categorical.log_prob()
      • Change the advantage standardization order; before value calc → after value calc
      • Add custom LR scheduler (DQN, PPO) (#103)
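
    A minimal sketch of the two PPO changes above (value clip and Categorical.log_prob), assuming PyTorch tensors; this is an editor's illustration, not JORLDY's actual code:

    import torch
    from torch.distributions import Categorical

    def clipped_value_loss(values, old_values, returns, clip_eps=0.2):
        # limit how far the new value estimate can move from the old one,
        # then take the element-wise max of clipped and unclipped squared errors
        values_clipped = old_values + torch.clamp(values - old_values, -clip_eps, clip_eps)
        return torch.max((values - returns).pow(2), (values_clipped - returns).pow(2)).mean()

    def stable_log_prob(logits, actions):
        # Categorical.log_prob stays in log space, avoiding log(0) from prob_tensor.log()
        return Categorical(logits=logits).log_prob(actions)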

    ⏰ Known Issues

    • ICM PPO and RND PPO performance degrades after PPO was modified; this needs to be fixed

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.2.0: @leonard-q, @ramanuzan
    Source code(tar.gz)
    Source code(zip)
  • v0.1.0(Dec 23, 2021)

    ❗Important

    • Unit test codes are implemented!
    • M-DQN, M-IQN are implemented! (#79)
    • Mujoco envs are supported! (#83)

    🛠️ Fixes & Improvements

    • RND code refactoring (#52) caused a fatal error → it is solved by changing the parameter name of RND (#71)
    • Change default initialization method (Xavier → Orthogonal) (#81)
    • Change Softmax to exp(log_softmax) (#82)
    • Unit test for Mujoco env is done (#93)

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.1.0: @leonard-q, @ramanuzan, @lkm2835

    Source code(tar.gz)
    Source code(zip)
  • v0.0.3(Nov 23, 2021)

    • Important
      • GitHub Actions is applied for Python code style (PEP8). Please refer to the style guide in CONTRIBUTING.md
      • New environment: Drone Delivery ML-Agents Environment is added! 🛸
      • ML-Agents Server builds are removed! Linux build with no_graphics option can be run on the Server. (#58)
    • Fixes & Improvements
      • JORLDY supports envs which provide multi-modal input (image, vector)
      • mlagents Windows issue
        • Issue #44 occurred when mlagents envs were run on Windows
        • #46 solved this problem (Thank you so much @zenoengine)
      • mlagents Linux build issue
        • mlagents envs had an error because .gitignore contained *.so, which removed all the .so files in the mlagents envs. Therefore, all the .so files are restored and .gitignore is modified.
      • ICM, RND code refactoring is conducted because of the duplicated functions (#52)
      • ICM PPO bug fix: remove softmax before calculating cross-entropy (#49)
      • *_timers.json files in mlagent envs caused conflicts when using git, so *_timers.json files are added to .gitignore (#59)
      • Benchmark is developed! → config, script, spec are added
    • Acknowledgement
      • Thanks to all who contributed to JORLDY v0.0.3: @zenoengine, @ramanuzan, @leonard-q
    Source code(tar.gz)
    Source code(zip)
  • v0.0.2(Nov 6, 2021)

    📢 Important

    • Now JORLDY fully supports Windows, Mac and Linux!

    🛠️ Fixes & Improvements

    • README minor fix
      • Remove $, >
      • fixed typos
    • modify gitignore; add python gitignore template
    • supports WSL, Windows and Mac
      • change agent instantiation code #28
      • custom dict can be pickled
      • multiprocessing qsize() → empty, full
    • remove _nomp.py files
      • solve multiprocessing issue on all OS

    🙏 Acknowledgement

    • Thanks to all who contributed to JORLDY v0.0.2: @zenoengine, @ramanuzan, @leonard-q
    Source code(tar.gz)
    Source code(zip)
  • v0.0.1(Nov 3, 2021)

    Hello WoRLd! ✋ This is the first version of JORLDY, an open-source Reinforcement Learning (RL) framework provided by KakaoEnterprise! We expect JORLDY to help researchers and students who study RL. The features of JORLDY are as follows ⭐.

    • 20+ RL Algorithms and various RL environments are provided
    • Algorithms and environments can be added and customized
    • RL algorithms and environments can be run with a single command
    • Distributed RL algorithms are provided using Ray
    • Benchmarks of the algorithms are conducted in many RL environments

    🤖 The implemented algorithms are as follows:

    • Deep Q Network (DQN), Double DQN, Dueling DQN, Multistep DQN, Prioritized Experience Replay (PER), C51, Noisy Network, Rainbow (DQN, IQN), QR-DQN, IQN, Curiosity Driven Exploration (ICM), Random Network Distillation (RND), APE-X, REINFORCE, DDPG, PPO, SAC, MPO, V-MPO

    🌎 The provided environments are as follows

    • GYM classic control, Unity ML-Agents, Procgen
      • GYM Atari and Super Mario Bros are excluded from the requirements because of license issues. You should install these environments manually.
    Source code(tar.gz)
    Source code(zip)
Owner
Kakao Enterprise Corp.
Reinforcement-learning - Repository of the class assignment questions for the course on reinforcement learning

DSE 314/614: Reinforcement Learning This repository containing reinforcement lea

Manav Mishra 4 Apr 15, 2022
FEDn is an open-source, modular and ML-framework agnostic framework for Federated Machine Learning

FEDn is an open-source, modular and ML-framework agnostic framework for Federated Machine Learning (FedML) developed and maintained by Scaleout Systems. FEDn enables highly scalable cross-silo and cross-device use-cases over FEDn networks.

Scaleout 75 Nov 9, 2022
Trading Gym is an open source project for the development of reinforcement learning algorithms in the context of trading.

Trading Gym Trading Gym is an open-source project for the development of reinforcement learning algorithms in the context of trading. It is currently

Dimitry Foures 535 Nov 15, 2022
PaddleRobotics is an open-source algorithm library for robots based on Paddle, including open-source parts such as human-robot interaction, complex motion control, environment perception, SLAM positioning, and navigation.

简体中文 | English PaddleRobotics paddleRobotics是基于paddle的机器人开源算法库集,包括人机交互、复杂运动控制、环境感知、slam定位导航等开源算法部分。 人机交互 主动多模交互技术TFVT-HRI 主动多模交互技术是通过视觉、语音、触摸传感器等输入机器人

null 185 Dec 26, 2022
A deep learning based semantic search platform that computes similarity scores between provided query and documents

semanticsearch This is a deep learning based semantic search platform that computes similarity scores between provided query and documents. Documents

null 1 Nov 30, 2021
PGPortfolio: Policy Gradient Portfolio, the source code of "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem"(https://arxiv.org/pdf/1706.10059.pdf).

This is the original implementation of our paper, A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem (arXiv:1706.1

Zhengyao Jiang 1.5k Dec 29, 2022
Provided is code that demonstrates the training and evaluation of the work presented in the paper: "On the Detection of Digital Face Manipulation" published in CVPR 2020.

FFD Source Code Provided is code that demonstrates the training and evaluation of the work presented in the paper: "On the Detection of Digital Face M

null 88 Nov 22, 2022
MOpt-AFL provided by the paper "MOPT: Optimized Mutation Scheduling for Fuzzers"

MOpt-AFL 1. Description MOpt-AFL is a AFL-based fuzzer that utilizes a customized Particle Swarm Optimization (PSO) algorithm to find the optimal sele

null 172 Dec 18, 2022
This script runs neural style transfer against the provided content image.

Neural Style Transfer Content Style Output Description: This script runs neural style transfer against the provided content image. The content image m

Martynas Subonis 0 Nov 25, 2021
A task Provided by A respective Artenal Ai and Ml based Company to complete it

A task Provided by A respective Alternal Ai and Ml based Company to complete it .

Parth Madan 1 Jan 25, 2022
Automatically measure the facial Width-To-Height ratio and get facial analysis results provided by Microsoft Azure

fwhr-calc-website This project is to automatically measure the facial Width-To-Height ratio and get facial analysis results provided by Microsoft Azur

SoohyunPark 1 Feb 7, 2022
Learning to trade under the reinforcement learning framework

Trading Using Q-Learning In this project, I will present an adaptive learning model to trade a single stock under the reinforcement learning framework

Uirá Caiado 470 Nov 28, 2022
An Open Source Machine Learning Framework for Everyone

Documentation TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, a

null 170.1k Jan 4, 2023
Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs (CIKM 2020)

Karate Club is an unsupervised machine learning extension library for NetworkX. Please look at the Documentation, relevant Paper, Promo Video, and Ext

Benedek Rozemberczki 1.8k Jan 7, 2023
An Open Source Machine Learning Framework for Everyone

Documentation TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, a

null 170.1k Jan 5, 2023
An Open Source Machine Learning Framework for Everyone

Documentation TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, a

null 153.2k Feb 13, 2021
ManiSkill-Learn is a framework for training agents on SAPIEN Open-Source Manipulation Skill Challenge (ManiSkill Challenge), a large-scale learning-from-demonstrations benchmark for object manipulation.

ManiSkill-Learn ManiSkill-Learn is a framework for training agents on SAPIEN Open-Source Manipulation Skill Challenge, a large-scale learning-from-dem

Hao Su's Lab, UCSD 48 Dec 30, 2022