Bittensor - an open, decentralized, peer-to-peer network that functions as a market system for the development of artificial intelligence

Overview

Bittensor



Internet-scale Neural Networks


At Bittensor, we are creating an open, decentralized, peer-to-peer network that functions as a market system for the development of artificial intelligence. Our purpose is not only to accelerate the development of AI by creating an environment optimally conducive to its evolution, but to democratize the global production and use of this valuable commodity. Our aim is to disrupt the status quo: a system that is centrally controlled, inefficient and unsustainable. In developing the Bittensor API, we are allowing standalone engineers to monetize their work, gain access to sophisticated machine intelligence models and join our community of creative, forward-thinking individuals. For more information, read our paper.

1. Documentation

https://app.gitbook.com/@opentensor/s/bittensor/

2. Install

There are two ways to install Bittensor:

  1. Through the installer (recommended):
$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/opentensor/bittensor/master/scripts/install.sh)"
  2. Through pip (advanced):
$ pip3 install bittensor
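
If the installation succeeded, the package version can be checked with:

$ python3 -c "import bittensor; print(bittensor.__version__)"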

3. Using Bittensor

The following examples showcase how to use the Bittensor API for three separate purposes.

3.1. Client

For users who want to explore what is possible on the Bittensor network.


import bittensor
import torch
wallet = bittensor.wallet().create()
graph = bittensor.metagraph().sync()
representations, _ = bittensor.dendrite( wallet = wallet ).forward_text (
    endpoints = graph.endpoints,
    inputs = "The quick brown fox jumped over the lazy dog"
)
# representations: N tensors with shape (1, 9, 1024)
...
# Distill model.
...
loss.backward() # Accumulate gradients on endpoints.

3.2. Server

For users who want to serve a custom model on the Bittensor network.


import bittensor
import torch
from transformers import BertModel, BertConfig

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = BertModel( BertConfig(vocab_size = bittensor.__vocab_size__, hidden_size = bittensor.__network_dim__) ).to(device)
optimizer = torch.optim.SGD( [ {"params": model.parameters()} ], lr = 0.01 )

def forward_text( pubkey, inputs_x ):
    return model( inputs_x.to(device) ).last_hidden_state

def backward_text( pubkey, inputs_x, grads_dy ):
    with torch.enable_grad():
        outputs_y = model( inputs_x.to(device) ).last_hidden_state
        torch.autograd.backward (
            tensors = [ outputs_y ],
            grad_tensors = [ grads_dy.to(device) ]
        )
        optimizer.step()
        optimizer.zero_grad()

wallet = bittensor.wallet().create()
axon = bittensor.axon (
    wallet = wallet,
    forward_text = forward_text,
    backward_text = backward_text
).start().subscribe()

3.3. Validator

For users who want to validate the models currently on the Bittensor network.


import bittensor
import torch

graph = bittensor.metagraph().sync()
dataset = bittensor.dataset()
chain_weights = torch.ones( [graph.n.item()], dtype = torch.float32 )

for batch in dataset.dataloader( 10 ):
    ...
    # Train chain_weights.
    ...
bittensor.subtensor().set_weights (
    weights = chain_weights,
    uids = graph.uids,
    wait_for_inclusion = True,
    wallet = bittensor.wallet(),
)
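
The weight-training step elided above is application-specific. As a minimal sketch of one possibility (assuming a hypothetical per-uid score signal; the real validator logic is more involved):

import torch

n = 10                                      # stand-in for graph.n.item()
chain_weights = torch.ones( [n], dtype = torch.float32 )
scores = torch.rand( n )                    # hypothetical measured performance per uid
alpha = 0.1                                 # exponential-moving-average factor
chain_weights = (1 - alpha) * chain_weights + alpha * scores
chain_weights = chain_weights / chain_weights.sum()   # normalize before set_weights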

4. Features

4.1. Creating a bittensor wallet

$ bittensor-cli new_coldkey --wallet.name <WALLET NAME>
$ bittensor-cli new_hotkey --wallet.name <WALLET NAME> --wallet.hotkey <HOTKEY NAME>

4.2. Selecting the network to join

There are two open Bittensor networks: Kusanagi and Akatsuki.

  • Kusanagi is the test network. Use Kusanagi to get familiar with Bittensor without worrying about losing valuable tokens.
  • Akatsuki is the main network. It will reopen (as Bittensor-akatsuki) in November 2021.
$ export NETWORK=akatsuki 
$ python (..) --subtensor.network $NETWORK
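
For example, to point the template miner (section 4.3) at the selected network:

$ python ~/.bittensor/bittensor/miners/text/template_miner.py --subtensor.network $NETWORK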

4.3. Running a template miner

The following command will run Bittensor's template miner:

$ python ~/.bittensor/bittensor/miners/text/template_miner.py

Or, with customized settings:

$ python ~/.bittensor/bittensor/miners/text/template_miner.py --wallet.name <WALLET NAME> --wallet.hotkey <HOTKEY NAME>

For the full list of settings, please run

$ python ~/.bittensor/bittensor/miners/text/template_miner.py --help

4.4. Running a template server

The template server follows a similar structure to the template miner.

$ python ~/.bittensor/bittensor/miners/text/template_server.py --wallet.name <WALLET NAME> --wallet.hotkey <HOTKEY NAME>

For the full list of settings, please run

$ python ~/.bittensor/bittensor/miners/text/template_server.py --help

4.5. Subscription to the network

Subscription to the Bittensor network is done through the axon. We must first create a bittensor wallet and a bittensor axon to subscribe; the forward_text and backward_text handlers are defined as in section 3.2.

import bittensor

wallet = bittensor.wallet().create()
axon = bittensor.axon (
    wallet = wallet,
    forward_text = forward_text,
    backward_text = backward_text
).start().subscribe()

4.6. Syncing with the chain / finding the ranks, stake, and uids of other nodes

Information from the chain is collected by the metagraph.

import bittensor

meta = bittensor.metagraph()
meta.sync()

# --- uid ---
print(meta.uids)

# --- hotkeys ---
print(meta.hotkeys)

# --- ranks ---
print(meta.R)

# --- stake ---
print(meta.S)
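
For example, assuming the metagraph tensors above, the ten highest-stake uids can be listed with:

# --- top ten uids by stake ---
top = meta.S.argsort(descending = True)[:10]
print(meta.uids[top], meta.S[top])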

4.7. Finding and creating the endpoints for other nodes in the network

import bittensor

meta = bittensor.metagraph()
meta.sync()

### Endpoint tensor for the node with uid 0
address = meta.endpoints[0]
endpoint = bittensor.endpoint.from_tensor(address)

4.8. Querying others in the network

import bittensor

meta = bittensor.metagraph()
meta.sync()

### Endpoint tensor for the node with uid 0
address = meta.endpoints[0]

### Creating the endpoint, wallet, and dendrite
endpoint = bittensor.endpoint.from_tensor(address)
wallet = bittensor.wallet().create()
den = bittensor.dendrite(wallet = wallet)

representations, _ = den.forward_text (
    endpoints = endpoint,
    inputs = "Hello World"
)

4.9. Creating a Priority Thread Pool for the axon

import concurrent.futures
import bittensor
import torch
from nuclei.server import server

config = bittensor.config()   # assumes a prepared bittensor config object
model = server( config = config, model_name = 'bert-base-uncased', pretrained = True )
optimizer = torch.optim.SGD( [ {"params": model.parameters()} ], lr = 0.01 )
threadpool = bittensor.prioritythreadpool( config = config )
metagraph = bittensor.metagraph().sync()

def forward_text( pubkey, inputs_x ):
    def call( inputs ):
        return model.encode_forward( inputs )

    # Prioritize callers by their stake on the metagraph.
    uid = metagraph.hotkeys.index( pubkey )
    priority = metagraph.S[ uid ].item()
    future = threadpool.submit( call, inputs = inputs_x, priority = priority )
    try:
        return future.result( timeout = model.config.server.forward_timeout )
    except concurrent.futures.TimeoutError:
        raise TimeoutError('TimeOutError')
  

wallet = bittensor.wallet().create()
axon = bittensor.axon (
    wallet = wallet,
    forward_text = forward_text,
).start().subscribe()

5. License

The MIT License (MIT) Copyright © 2021 Yuma Rao

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

6. Acknowledgments

learning-at-home/hivemind

Comments
  • [feature] [BIT 578] speed up metagraph storage query


    This PR speeds up the storage call made by subtensor.neurons in the subtensor.use_neurons_fast function.

    This feature works by bundling a nodejs binary with the polkadotjs API.
    This binary is a CLI that implements the sync_and_save --filename <default:~/.bittensor/metagraph.json> --block_hash <default:latest> command.
    This syncs the metagraph at the given block hash and saves it to a JSON file.

    The speed-up is quite significant; below is a test run of the manual sync without the fix, with the IPFS cache, and with the fix.

    [benchmark output]

    And below is the IPFS cache sync versus the manual sync (with fix).

    [benchmark output: cache vs. fixed manual sync]

    A pro of this is that it removes the need for a centralized IPFS cache of the metagraph.

    A downside of this fix is that the binaries with nodejs bundled use ~50MB each (one linux, one macos).
    There is currently no binary for windows, but I'm not certain this should be included anyway, as we only support linux/macos.

    Another pro of this fix is it works on both nobunaga and nakamoto, and can be adapted to any network. This also leaves room for adding other large substrate queries and working further with the polkadot js api.

    do not merge 
    opened by camfairchild 9
  • Fix Docker Build and Docker-Compose


    This PR:

    - Irons out the Python version inconsistency between the install scripts and the Dockerfile

    Right now, the Docker image builds with Python 3.7, but the install.sh script per the install instructions installs Python 3.8. This is because the install script installs a generic Python 3, while the Dockerfile specifically pins 3.7. This makes the Docker container incompatible with the new keccak update, breaking it.

    - Fixes the Docker Compose spec so the command correctly routes to and finds the Bittensor Python library

    Previously, we were not specifying the path to the Bittensor Python library. By setting the PYTHONPATH environment variable, we tell the container where to find it.

    opened by rmaronn 7
  • Add signature v2 format


    Summary

    References https://github.com/opentensor/bittensor/pull/976

    This PR introduces a new signature format for requests, in order to avoid situations in which validator requests are fulfilled by nodes which are not targeted by the RPC.

    This PR is based on #976 in order to avoid merge conflicts - only the last two commits are relevant here.

    Changes

    • Remove the static header check. Receptors will still keep adding it, but it is ignored from now on.
    • Add v2 signature format, which also takes into account the target axon hotkey. The v2 signature ensures that the signature cannot be faked by an intermediary that is not a validator (see the sketch after this list).
    • Ensure that nonces cannot be replayed by disallowing equality. Allowing nonce equality renders the nonce moot.
    • On chain parameters of an axon are now always updated. Previously they would be updated only on IP/port changes.
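
    A hedged, purely illustrative sketch of the idea (the field names and separator here are hypothetical, not the actual wire format): because the target axon hotkey is inside the signed payload, a non-target intermediary can neither redirect nor replay the request.

    def build_v2_message( nonce: int, sender_hotkey: str, target_hotkey: str, receptor_uuid: str ) -> bytes:
        # The target hotkey is part of the signed payload, so the signature
        # is only valid for requests addressed to that axon.
        return f"{nonce}.{sender_hotkey}.{target_hotkey}.{receptor_uuid}".encode()

    def nonce_is_fresh( last_seen_nonce: int, nonce: int ) -> bool:
        # Strict inequality: an equal nonce is a replay, not a fresh request.
        return nonce > last_seen_nonce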
    enhancement feature release/3.6.0 
    opened by adriansmares 6
  • Fallback subtensor


    Adds fallback endpoints to bittensor.subtensor allowing the user to pass a list of fallback endpoints in the event that the default fails.

    btcli overview --subtensor.chain_endpoint badendpoint --subtensor.fallback_endpoints AtreusLB-2c6154f73e6429a9.elb.us-east-2.amazonaws.com:9944 --no_prompt --logging.debug

    opened by unconst 6
  • Support arbitrary gRPC request metadata order


    Summary

    This PR fixes the gRPC server interceptor logic so that it can handle any metadata order. gRPC request metadata is a set of key-value pairs with no inherent order, so relying on the receiver seeing the same order as the sender is incorrect.

    gRPC metadata is sent over as HTTP headers, and the order of HTTP headers may be changed by intermediary proxies.
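
    A hedged sketch of the approach (not the actual AuthInterceptor code; the metadata key name is hypothetical): treat the invocation metadata as a dictionary so that header order never matters.

    import grpc

    class OrderAgnosticInterceptor( grpc.ServerInterceptor ):
        def intercept_service( self, continuation, handler_call_details ):
            # invocation_metadata is a sequence of (key, value) pairs with no
            # guaranteed order; a dict lookup removes positional assumptions.
            metadata = dict( handler_call_details.invocation_metadata )
            signature = metadata.get( 'bittensor-signature' )  # hypothetical key
            # ... verify the signature here, independent of header order ...
            return continuation( handler_call_details )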

    Changes

    • Format the AuthInterceptor using black.
    • Use the invocation metadata as a dictionary, not a list of key-value pairs.
    • Do not trust the user provided request_type and use the gRPC method instead.
    • Fix request_type provided for backward calls.

    Testing

    Tested locally by proxying the axon traffic using Traefik.

    enhancement feature 
    opened by adriansmares 4
  • changed local train and remote train descriptions


    Local and remote train sometimes cause bugs if explicitly set to false. Since the default is already false, a note is added reminding users to pass this flag only when setting it to true.

    do not merge 
    opened by quac88 4
  • [feature] [CUDA solver] Add multi-GPU and ask for CUDA during btcli run


    This PR adds

    • Multi-GPU registration capability (one process per GPU + master)
      • btcli register --cuda.dev_id 0 1 2 3 --cuda.use_cuda
    • A prompt for CUDA registration during btcli run
      • can skip this prompt with btcli run --wallet.reregister false
    enhancement feature 
    opened by camfairchild 4
  • No Serve


    Adds a flag --neuron.no_serve that stops an axon from being served for a validator.

    Useful for those running validators and servers from the same hotkey who need the axon port to be used by their miner.

    opened by CreativeBuilds 3
  • Release 3.4.2


    • [Fix] promo change to axon and dendrite https://github.com/opentensor/bittensor/pull/981
    • [Feature] no version checking flag https://github.com/opentensor/bittensor/pull/974
    • [Fix] add perpet hash rate and adjust alpha https://github.com/opentensor/bittensor/pull/960
    • [Fix] stake conversion issue https://github.com/opentensor/bittensor/pull/958
    • [Feature] Dendrite asyncio https://github.com/opentensor/bittensor/pull/967
    • [Fix] Bit 590 backward fix https://github.com/opentensor/bittensor/pull/957
    • [Feature] No set weights https://github.com/opentensor/bittensor/pull/959
    • [Feature] Improve dataloader performance https://github.com/opentensor/bittensor/pull/950
    • [Fix] Remove locals from cli and bittensor common https://github.com/opentensor/bittensor/pull/947
    • [Fix] Minor fixes https://github.com/opentensor/bittensor/pull/955

    opened by unconst 3
  • [Fix] multi cuda fix


    This PR addresses the issues with CUDA registration introduced by v3.3.4. This fixes:

    • Improper termination

    And this should fix:

    • Reduced actual hashrate (does not match reported), by changing
      nonce_start += self.update_interval * self.num_proc
      to
      nonce_start += self.update_interval * self.TPB
    opened by camfairchild 3
  • [hotfix] fix flags for run command, fix hotkeys flag for overview, and [feature] CUDA reg


    #876 Fixes the flags

    The CPU register optimization introduced a bug where the btcli run command lacked some config flags that were used in CLI.register. This PR adds them in

    Edit:

    This PR also adds #868 and #867

    opened by camfairchild 3
  • Remove support for huge models


    Describe the bug

    The scaling law upgrade is actually backfiring on people running fine-tuned miners, because they are hitting a "glass ceiling" when it comes to loss. Scaling laws seem to prefer small miners with low loss.

    To Reproduce

    Steps to reproduce the behavior:

    1. Execute the following code with a loss of 2: https://github.com/opentensor/bittensor/blob/d532ad595d39f287f8bef445cc1823e6fcdadc3c/bittensor/_neuron/text/core_validator/__init__.py#L1000
    2. Execute the same code with a loss of 1.69.
    3. Execute the same code with a loss of 1.5.
    4. Execute the same code with a loss of 1.
    5. Execute the same code with a loss of 0.

    Expected behavior

    The reproduction should return a higher result each time it is run. A loss of 0 is theoretically possible.

    Environment:

    • OS and Distro: N/A
    • Bittensor Version: 3.5.0

    Additional context

    A fine-tuned 2.7B receives the same weights as an untuned 6B and an untuned 20B. This triggered an investigation into why this would be the case. It turns out the scaling law has a minimum of 1.69, which is not mentioned in the corresponding paper and is held by some to be an incorrect estimation. The paper can be disproven by observation.
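
    To make the "glass ceiling" concrete, here is a hypothetical sketch (not the core_validator code) of an inverse Hoffmann-et-al.-style scaling law with irreducible loss E = 1.69: every loss at or below E collapses to the same parameter estimate, so the runs in steps 2-5 above are indistinguishable.

    def loss_to_params( loss: float, E: float = 1.69, A: float = 406.4, alpha: float = 0.34 ) -> float:
        # Invert L(N) = E + A / N**alpha; clamp at E so the estimate stays finite.
        reducible = max( loss - E, 1e-9 )
        return ( A / reducible ) ** ( 1.0 / alpha )

    for loss in ( 2.0, 1.69, 1.5, 1.0, 0.0 ):
        print( loss, f"{loss_to_params(loss):.3e}" )  # 1.69 and below print the same value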

    opened by mrseeker 2
  • Add Mask to TextLastHiddenState


    Adds masking to the TextLastHiddenState Synapse.

    For a return tensor of size (batch_size, sequence_length, hidden_state), we can now optionally pass a mask as: bittensor.synapse.TextLastHiddenState(mask: Optional[List[int]]). The mask applies across the sequence_length dimension, i.e. if mask == [0], only the hidden state for the 0th element will be filled. Explicitly, return_tensor[:, 1:, :] would equal 0.

    Alternatively a list of non-consecutive integers can be passed, e.g. bittensor.synapse.TextLastHiddenState(mask = [0, 5, 3]) would only return non-empty tensors for sequence indices 0, 5, and 3.

    This change can drastically decrease the network bandwidth.

    Alternatively, the user can specify a random mask, d.text_last_hidden_state ( ... mask = random.choice( seq_len, k ), ... ). This random masking can be used for adversarial resistance or JEPA-style training.
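
    A hedged sketch of the masking idea (illustrative tensor math, not the Synapse internals): only the masked sequence positions carry non-zero hidden states, which is where the bandwidth saving comes from.

    import torch

    batch_size, seq_len, hidden = 2, 9, 1024
    hidden_states = torch.randn( batch_size, seq_len, hidden )
    mask = [0, 5, 3]  # non-consecutive indices are allowed

    masked = torch.zeros_like( hidden_states )
    masked[:, mask, :] = hidden_states[:, mask, :]
    # Only len(mask)/seq_len of the positions are non-zero, cutting bandwidth accordingly.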

    opened by unconst 0
  • Way to monitor the distribution of the loss in live server.


    I am having difficulty deciding which models are performing best and worst. I need to monitor the loss distribution over a whole period of time.

    I am unable to get the loss variable without local and remote train disabled in https://github.com/opentensor/bittensor/tree/master/bittensor/_neuron/text/core_server/run.py

    Describe alternatives you've considered

    I want to know where the loss variable in the live server is located. I may save it to a DB and plot the distribution, or monitor the loss distribution to compare live runs of different models.


    opened by ALI7861111 0
  • Bit 578 speed up metagraph storage query


    Note: this feature requires using https://github.com/opentensor/subtensor/pull/26 in the subtensor node that you query, as the buffer size needs to be changed for the fast sync to work.


    This PR adds the subtensorapi package to grab live storage from the chain in a faster manner.

    The current live-sync takes ~8 minutes (see #933) using only pure Python.

    The subtensorapi package wraps a nodejs binary in python and utilizes the @polkadot/api npm library to sync from the chain.

    This sync outputs JSON to ~/.bittensor/metagraph.json (by default), which is then read back into Python and returned to the bittensor package.

    Below is a current graph of the performance of subtensorapi (sapi) vs the ipfs sync (current cached-sync).
    The results may be worse than average as the request times are very node-dependent. This node is hosted on a cheap contabo VPS with heavy traffic. I expect request times to be similar to this, if not better.

    [performance graph]

    Further, subtensorapi can be extended to support other storage values. Currently it also supports the subtensorModule.blockAtRegistration map using Subtensor.blockAtRegistration_all_fast()

    enhancement feature do not merge 
    opened by camfairchild 6
Releases (latest: v3.6.1)
  • v3.6.1(Dec 21, 2022)

    What's Changed

    • V3.6.0 nobunaga merge by @Eugene-hu in https://github.com/opentensor/bittensor/pull/1028
    • Integration dendrite test fixes by @Eugene-hu in https://github.com/opentensor/bittensor/pull/1029
    • Adding 3.6.0 release notes to CHANGELOG by @eduardogr in https://github.com/opentensor/bittensor/pull/1032
    • [BIT-612] Validator robustness improvements by @opentaco in https://github.com/opentensor/bittensor/pull/1034
    • [Hotfix 3.6.1] Validator robustness by @opentaco in https://github.com/opentensor/bittensor/pull/1035

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.6.0...v3.6.1

  • v3.6.0(Dec 13, 2022)

    What's Changed

    • Removal of dendrite multiprocessing by @Eugene-hu in https://github.com/opentensor/bittensor/pull/1017
    • Merging back 3.5.1 fix to nobunaga by @eduardogr in https://github.com/opentensor/bittensor/pull/1018
    • Release/3.5.0 post release by @eduardogr in https://github.com/opentensor/bittensor/pull/1010
    • Fixes issue with --neuron.no_set_weights by @camfairchild in https://github.com/opentensor/bittensor/pull/1020
    • Removing GitHub workflow push docker by @eduardogr in https://github.com/opentensor/bittensor/pull/1011
    • [Fix] fix max stake for single by @camfairchild in https://github.com/opentensor/bittensor/pull/996
    • [Feature] mention balance if not no prompt by @camfairchild in https://github.com/opentensor/bittensor/pull/995
    • Add signature v2 format by @adriansmares in https://github.com/opentensor/bittensor/pull/983
    • Improving the way we manage requirements by @eduardogr in https://github.com/opentensor/bittensor/pull/1003
    • [BIT-601] Scaling law on EMA loss by @opentaco in https://github.com/opentensor/bittensor/pull/1022
    • [BIT-602] Update scaling power from subtensor by @opentaco in https://github.com/opentensor/bittensor/pull/1027
    • Release 3.6.0 by @eduardogr in https://github.com/opentensor/bittensor/pull/1023

    New Contributors

    • @adriansmares made their first contribution in https://github.com/opentensor/bittensor/pull/976

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.5.1...v3.6.0

  • v3.5.1(Nov 25, 2022)

    What's Changed

    • [hotfix] pin scalecodec lower by @camfairchild in https://github.com/opentensor/bittensor/pull/1013

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.5.0...v3.5.1

  • v3.5.0(Nov 24, 2022)

    What's Changed

    • [Fix] allow synapse all (https://github.com/opentensor/bittensor/pull/988)

      • allow set synapse All using flag
      • add test
      • use dot get
    • [Feature] Mark registration threads as daemons (https://github.com/opentensor/bittensor/pull/998)

      • make solver processes daemons
    • [Feature] Validator debug response table (https://github.com/opentensor/bittensor/pull/999)

      • Add response table to validator debugging
    • [Feature] Validator weight setting improvements (https://github.com/opentensor/bittensor/pull/1000)

      • Remove responsive prioritization from validator weight calculation
      • Move metagraph_sync just before weight setting
      • Add metagraph register to validator
      • Update validator epoch conditions
      • Log epoch while condition details
      • Consume validator nucleus UID queue fully
      • Increase synergy table display precision
      • Round before casting to int in phrase_cross_entropy
    • small fix for changelog and version by @Eugene-hu in https://github.com/opentensor/bittensor/pull/993

    • release/3.5.0 by @eduardogr in https://github.com/opentensor/bittensor/pull/1006

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.4.3...v3.5.0

  • v3.4.3(Nov 15, 2022)

    What's Changed

    • [Hotfix] Synapse security update by @opentaco in https://github.com/opentensor/bittensor/pull/991

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.4.2...v3.4.3

  • v3.4.2(Nov 9, 2022)

    What's Changed

    • Adding 3.4.0 changelog to CHANGELOG.md by @eduardogr in https://github.com/opentensor/bittensor/pull/953
    • Release 3.4.2 by @unconst in https://github.com/opentensor/bittensor/pull/970

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.4.1...v3.4.2

  • v3.4.1(Oct 13, 2022)

    What's Changed

    • [Hotfix] Fix CUDA Reg update block by @camfairchild in https://github.com/opentensor/bittensor/pull/954

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.4.0...v3.4.1

  • v3.4.0(Oct 13, 2022)

    What's Changed

    • Parameters update by @Eugene-hu #936
    • Bittensor Generate by @unconst #941
    • Prometheus by @unconst #928
    • [Tooling][Release] Adding release script by @eduardogr in https://github.com/opentensor/bittensor/pull/948

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.3.4...v3.4.0

  • v3.3.4(Oct 3, 2022)

    What's Changed

    • [hot-fix] fix indent again. add test by @camfairchild in https://github.com/opentensor/bittensor/pull/907
    • Delete old gitbooks by @quac88 in https://github.com/opentensor/bittensor/pull/924
    • Release/3.3.4 by @Eugene-hu in https://github.com/opentensor/bittensor/pull/927

    New Contributors

    • @quac88 made their first contribution in https://github.com/opentensor/bittensor/pull/924

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.3.3...v3.3.4

  • v3.3.3(Sep 6, 2022)

    What's Changed

    • [feature] cpu register faster by @camfairchild in https://github.com/opentensor/bittensor/pull/854
    • [hotfix] fix flags for multiproc register limit by @camfairchild in https://github.com/opentensor/bittensor/pull/876
    • Fix/diff unpack bit shift by @camfairchild in https://github.com/opentensor/bittensor/pull/878
    • [Feature] [cubit] CUDA registration solver by @camfairchild in https://github.com/opentensor/bittensor/pull/868
    • Fix/move overview args to cli by @camfairchild in https://github.com/opentensor/bittensor/pull/867
    • Add/address CUDA reg changes by @camfairchild in https://github.com/opentensor/bittensor/pull/879
    • [Fix] --help command by @camfairchild in https://github.com/opentensor/bittensor/pull/884
    • Validator hotfix min allowed weights by @Eugene-hu in https://github.com/opentensor/bittensor/pull/885
    • [BIT-552] Validator improvements (nucleus permute, synergy avg) by @opentaco in https://github.com/opentensor/bittensor/pull/889
    • Bit 553 bug fixes by @isabella618033 in https://github.com/opentensor/bittensor/pull/886
    • add check to add ws:// if needed by @camfairchild in https://github.com/opentensor/bittensor/pull/896
    • [BIT-572] Exclude lowest quantile from weight setting by @opentaco in https://github.com/opentensor/bittensor/pull/895
    • [BIT-573] Improve validator epoch and responsives handling by @opentaco in https://github.com/opentensor/bittensor/pull/901
    • Nobunaga Release V3.3.3 by @Eugene-hu in https://github.com/opentensor/bittensor/pull/899

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.3.2...v3.3.3

  • v3.3.2(Aug 23, 2022)

    SynapseType fix in dendrite

    What's Changed

    • SynapseType fix in dendrite by @robertalanm in https://github.com/opentensor/bittensor/pull/874

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.3.1...v3.3.2

  • v3.3.1(Aug 23, 2022)

    What's Changed

    • [hotfix] Fix GPU reg bug. bad indent by @camfairchild in https://github.com/opentensor/bittensor/pull/883

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.3.0...v3.3.1

  • v3.3.0(Aug 23, 2022)

    CUDA registration

    This release adds the ability to complete the registration using a CUDA-capable device.
    See https://github.com/opentensor/cubit/releases/tag/v1.0.5 for the required cubit v1.0.5 release

    Also a few bug fixes for the CLI

    What's Changed

    • [hotfix] fix flags for run command, fix hotkeys flag for overview, and [feature] CUDA reg by @camfairchild in https://github.com/opentensor/bittensor/pull/877

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.2.0...v3.3.0

  • v3.2.0(Aug 23, 2022)

    Validator saving and responsive-priority weight-setting

    What's Changed

    • [BIT-540] Choose responsive UIDs for setting weights in validator + validator save/load by @opentaco in https://github.com/opentensor/bittensor/pull/872

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.1.0...v3.2.0

  • v3.1.0(Aug 23, 2022)

    Optimizing multi-processed CPU registration

    This release refactors the registration code for CPU registration to improve solving performance.

    What's Changed

    • [feature] cpu register faster (#854) by @camfairchild in https://github.com/opentensor/bittensor/pull/875

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.0.0...v3.1.0

  • v3.0.0(Aug 8, 2022)

  • v2.1.0(Aug 5, 2022)
