Keeper for Ricochet Protocol, implemented with Apache Airflow

Overview

Ricochet Keeper

This repository contains Apache Airflow DAGs for executing keeper operations for Ricochet Exchange.

Usage

You will need to run this using Docker and Docker Compose.

docker-compose up

ℹ️ This will take a while the first time you run it.
⚠️ You may need to increase your Docker memory to more than 4 GB; the default is 2 GB.

Setup

After starting up Airflow, navigate to Admin > Connections and set up the following:

  • An HTTP connection called infura with the connection's Extra set to:
{
"http_endpoint_uri": "YOUR_INFURA_HTTP_URI",
"wss_endpoint_uri": "YOUR_INFURA_WSS_URI"
}
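As a quick sanity check, the Extra field must be valid JSON containing both endpoint keys before the DAGs can use it. A minimal sketch (the URI values are the same placeholders as above, not real endpoints):

```python
import json

# The Extra field of the "infura" connection, as pasted into the Airflow UI.
extra = """
{
  "http_endpoint_uri": "YOUR_INFURA_HTTP_URI",
  "wss_endpoint_uri": "YOUR_INFURA_WSS_URI"
}
"""

config = json.loads(extra)

# Both keys must be present; the keeper builds its web3 providers from them.
assert {"http_endpoint_uri", "wss_endpoint_uri"} <= config.keys()
```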
  • Navigate to Admin > Variables and add the following:
    • distributor-address - the address used for executing distribute transactions
    • harvester-address - the address used for executing harvest transactions
    • reporter-address - the address used for reporting to Tellor
    • closer-address - the address used for closing streams
    • ricochet-exchange-addresses - add these addresses to the value field:
    [ "0xeb367F6a0DDd531666D778BC096d212a235a6f78", "0x27C7D067A0C143990EC6ed2772E7136Cfcfaecd6", "0x5786D3754443C0D3D1DdEA5bB550ccc476FdF11D", "0xe0A0ec8dee2f73943A6b731a2e11484916f45D44", "0x8082Ab2f4E220dAd92689F3682F3e7a42b206B42", "0x3941e2E89f7047E0AC7B9CcE18fBe90927a32100", "0x71f649EB05AA48cF8d92328D1C486B7d9fDbfF6b", "0x47de4Fd666373Ca4A793e2E0e7F995Ea7D3c9A29", "0x94e5b18309066dd1E5aE97628afC9d4d7EB58161", "0xdc19ed26aD3a544e729B72B50b518a231cBAD9Ab", "0xC89583Fa7B84d81FE54c1339ce3fEb10De8B4C96", "0x9BEf427fa1fF5269b824eeD9415F7622b81244f5", "0x0A70Fbb45bc8c70fb94d8678b92686Bb69dEA3c3", "0x93D2d0812C9856141B080e9Ef6E97c7A7b342d7F", "0xE093D8A4269CE5C91cD9389A0646bAdAB2c8D9A3", "0xA152715dF800dB5926598917A6eF3702308bcB7e", "0x250efbB94De68dD165bD6c98e804E08153Eb91c6", "0x98d463A3F29F259E67176482eB15107F364c7E18" ]
    
    • ricochet-lp-addresses - the addresses of markets with harvest methods
    [ "0x0cb9cd99dbC614d9a0B31c9014185DfbBe392eb5"]
    
    • tellor-assets - the mapping of Coingecko token names to their Tellor request IDs
    {
      "ethereum": 1,
      "wrapped-bitcoin": 60,
      "maker": 5,
      "matic-network": 6,
      "idle": 79
    }
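A task can read the tellor-assets Variable back to resolve a Tellor request ID from a Coingecko token name. A minimal sketch using the mapping above (plain Python here; in a real DAG the dict would come from Airflow's Variable.get with deserialize_json=True):

```python
import json

# The tellor-assets Variable value: Coingecko token name -> Tellor request ID.
tellor_assets = json.loads("""
{
  "ethereum": 1,
  "wrapped-bitcoin": 60,
  "maker": 5,
  "matic-network": 6,
  "idle": 79
}
""")

def request_id_for(token_name):
    """Look up the Tellor request ID for a Coingecko token name."""
    return tellor_assets[token_name]

print(request_id_for("maker"))  # -> 5
```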
    
  • Create an HTTP connection for each of the public addresses you used in the previous step:
    • Set the connection ID to the public address
    • Set the Login to the public address
    • Set the Password to the private key for that public address

Run

Run the keeper using Docker Compose

docker-compose up

Airflow runs on port 80, so navigate to http://localhost to access the UI. Once everything has booted up, log in with username airflow and password airflow.

Run as daemon

Use:

docker-compose up -d
Comments
  • Keeper V2

    Keeper V2

    List of changes:

    • lighter build of the Apache Airflow image (removed some unused components)
    • secured install of the keeper
      • customized passwords for the airflow user and Postgres
      • autogeneration of the Fernet key for metadata encryption
      • nginx entry point with a self-signed SSL certificate to secure the GUI
      • introduced fail2ban to prevent SSH brute-force attacks
    • automated deploy on custom platforms via a make script
    • automated deploy via Terraform on AWS for CI
    • automated SQL table creation
    • automated import of variables and connections
    • updated to the latest Python and Postgres versions
    • introduced Loki for centralized logging (WIP, not functional at the moment)
    • introduced a statsd exporter for monitoring (WIP, not functional at the moment)
    opened by samirsalem 2
  • Create a staging keeper

    Create a staging keeper

    In order to test DAG execution, and in general that the keeper still works when changes are made, I'm working on a staging keeper to test those changes as part of the CI stack. All changes will be executed against this new keeper.

    $1000 bounty 
    opened by samirsalem 1
  • Make new Docker Image

    Make new Docker Image

    Keepers have been using my tellor-airflow image, but we need to incorporate a Docker image here and build it without relying on the one I pushed to Docker Hub months ago.

    This issue is to make a Dockerfile to use instead of my Docker Hub image. A Dockerfile for this can be found here: https://github.com/tellor-io/airflow

    good first issue help wanted $1000 bounty 
    opened by mikeghen 1
  • Test Transaction Status before Sending

    Test Transaction Status before Sending

    Many distribute calls fail with BAD_EXCHANGE_RATE, which just means the exchange price on SushiSwap/QuickSwap is bad relative to the "global" price reported by Tellor. Right now we send these transactions anyway and let them fail, wasting gas.

    This issue is to add a pre-send check in the ContractInteractionOperator to verify that a transaction won't fail, something like:

    txn = self.function(...).buildTransaction(...)
    if not confirm_success(txn):
        return False
    ...
    

    where confirm_success will use web3.py to verify the transaction will succeed (without executing it).
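One possible shape for confirm_success, sketched with the simulation call injected so it can be tested without an RPC node. This is an illustration, not the operator's actual code; in practice the simulator would be something like web3.py's w3.eth.call, which raises an exception when the call would revert:

```python
def confirm_success(txn, simulate):
    """Return True if simulating the transaction does not revert."""
    try:
        simulate(txn)
        return True
    except Exception:
        return False

# Stub simulators standing in for a node. With web3.py, `simulate` would be
# a wrapper around `w3.eth.call(txn)`, which raises on revert.
def call_succeeds(txn):
    return b"\x00"

def call_reverts(txn):
    raise ValueError("execution reverted: BAD_EXCHANGE_RATE")

print(confirm_success({"to": "0x0"}, call_succeeds))  # -> True
print(confirm_success({"to": "0x0"}, call_reverts))   # -> False
```

Simulating first costs one extra RPC call per transaction, but saves the gas currently burned on transactions that are known to revert.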

    help wanted $2500 bounty 
    opened by mikeghen 1
  • Use Variables to set Schedules

    Use Variables to set Schedules

    This task involves converting the schedules for each DAG to use a Variable. Create variables like:

    distribution-schedule-interval -> "0 * * * *"
    harvest-schedule-interval -> "0 0 * * *"
    

    Then in the DAGs:

    schedule_interval = Variable.get("distribution-schedule-interval", "0 * * * *")

    dag = DAG("ricochet_stream_watch",
              max_active_runs=1,
              catchup=False,
              default_args=default_args,
              schedule_interval=schedule_interval)
    $1000 bounty 
    opened by mikeghen 1
  • Secure the keeper

    Secure the keeper

    secured install of the keeper:
    • customized passwords for the airflow user and Postgres
    • autogeneration of the Fernet key for metadata encryption
    • nginx entry point with a self-signed SSL certificate to secure the GUI
    • introduced fail2ban to prevent SSH brute-force attacks
    All changes can be found here: https://github.com/Ricochet-Exchange/ricochet-keeper/pull/39

    $4000 bounty 
    opened by samirsalem 0
  • Dockerfile

    Dockerfile

    This needs to be tested by configuring all variables and by creating the SQL tables.

    • A new version of Apache Airflow is used
    • Newer Python libraries are used
    • The permissions issue with the logs folder is corrected
    • Airflow sources must be in the repo in order to build the container (should this be automated before building the container?)
    • The unused Grafana container is removed from the build
    • docker-compose up takes considerably more time to build the complete stack
    opened by samirsalem 0
  • Consolidate ricochet_tellor_reporter workflow into distribute_v2 workflow

    Consolidate ricochet_tellor_reporter workflow into distribute_v2 workflow

    Stale price data for the REX markets is common when the prices aren't being updated by keepers. This task is to trigger oracle updates as part of the distribute_v2 workflow. Reference the ricochet_tellor_reporter workflow for the code that triggers oracle updates.

    good first issue help wanted 
    opened by mikeghen 0
  • Keep on multiple networks

    Keep on multiple networks

    We need to be able to keep on multiple networks. Superfluid is deployed on Gnosis Chain and Avalanche. Getting REX set up on these networks will increase volume.

    As a first step, the existing DAGs should be changed so that if there are multiple networks, there will be multiple DAGs.

    A single DAG script usually creates one DAG; the pattern is one file, one DAG. However, we can use a single file to make multiple DAGs.

    One solution is to add a variable networks = ['polygon', 'gnosis', 'avalanche'] and then at the top of the DAG files use:

    for network in Variable.get("networks", deserialize_json=True):
        dag = DAG(f"ricochet_distribute_{network}", ...)
    

    A single DAG file will then make multiple DAGs:

    ricochet_distribute_polygon
    ricochet_distribute_gnosis
    ricochet_distribute_avalanche
    
    help wanted 
    opened by mikeghen 0
  • CI mechanism for keepers

    CI mechanism for keepers

    A mechanism was needed to test changes in DAGs and updates of Airflow. This mechanism is described in the PR below: https://github.com/Ricochet-Exchange/ricochet-keeper/pull/39

    opened by samirsalem 0
  • Trigger a notification when the price has not been updated

    Trigger a notification when the price has not been updated

    Suggested by a Superfluid member.

    Goal: trigger a notification if the price has not been updated in four hours. I'm not sure how to grab the keeper status, since I don't run a keeper node. I hope this can be clarified.
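The staleness check itself is straightforward once a last-update timestamp is available (how to fetch that from a keeper is the open question above). A minimal sketch of the four-hour rule; the function name and threshold constant are illustrative:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=4)

def price_is_stale(last_update, now=None):
    """Return True if the last price update is older than four hours."""
    now = now or datetime.now(timezone.utc)
    return now - last_update > STALE_AFTER

now = datetime(2022, 1, 1, 12, 0, tzinfo=timezone.utc)
fresh = now - timedelta(hours=1)
stale = now - timedelta(hours=5)

print(price_is_stale(fresh, now))  # -> False
print(price_is_stale(stale, now))  # -> True
```

A check like this could run on its own DAG schedule and fire the notification (e.g. a Discord webhook) whenever it returns True.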

    opened by josealonso 0
Owner
Ricochet Exchange