Python generation script for BitBirds

Intro

This is published under MIT license, which means you can do whatever you want with it - entirely at your own risk.

Please don't be an asshole. This is, like, grassroots and stuff.

Specifically I'm asking you in good faith not to directly knock off the BitBirds project, or otherwise screw me over for sharing this. Do not use this for anything hateful or discriminatory.

There is a YouTube video walkthrough to complement this ReadMe ...Link....

Setting expectations

If you're new to programming you may struggle to set up the dependencies. If you're persistent, you can do it! I believe in you.

Often in technology, setting up a prerequisite like PIP (Python's package installer) isn't something the developer thinks about for a given project, because it has already been on their computer for months or years.

Even having set up a number of dependencies just a few weeks ago for this project, I don't remember exactly how I worked through the various error messages. When you try to run the script and it fails, there will often be some useful nugget of information buried in the cryptic response blob. As a rule for life - Google is your friend, and others have probably encountered your exact error message. When asking questions on Discord, Stack Overflow, or wherever, say very specifically (1) what you've tried, (2) what you expect as the result, and (3) what issue/error you're encountering. That'll get a lot more useful feedback than just shouting HALP.

Dependencies

The dependencies were all installed with the terminal/command line. There is abundant documentation about the terminal generally, and about these tools specifically, but unfortunately I did not save copies of the web pages I used. From memory, the things I needed to set up were:

  • Python 3 (the default on my Mac was Python 2.7)

  • PIP - Python's command-line package installer.

IIRC I needed to use a slightly different command so that pip installed the items below for Python 3 rather than Python 2 - perhaps pip3 install ... rather than pip install ...?

  • Pillow - a Python library for generating images - installed via the terminal/command line with PIP

  • NumPy - a Python library for working with arrays - installed via the terminal/command line with PIP

  • I don't think I had to install the 'random' library used for the number randomization - it's part of Python's standard library, so no separate install should be needed. (A quick way to confirm everything is in place follows this list.)
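
If you want to confirm the pieces are in place before running the generator, a quick sanity check like this (just a convenience snippet of my own, not part of the BitBirds script) should run cleanly under Python 3:

    # confirms Python 3 plus the libraries the script needs
    import sys
    print("Python:", sys.version.split()[0])   # should start with 3.x

    import PIL                       # Pillow installs under the name PIL
    from PIL import Image
    print("Pillow:", PIL.__version__)

    import numpy
    print("NumPy:", numpy.__version__)

    import random                    # standard library - nothing to install
    print("random: OK")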

If you encounter specific setup items I haven't mentioned here let me know, and I'll add them.

How this script works

The video I've put on YouTube complements this overview.

We are iterating through a 'loop' once for each bird. The loop starts with a 'seed number' that is used to deterministically generate pseudo-random numbers. I say 'deterministically' and 'pseudo-random' because from the same seed number the 'random' output will always be the same. It's not truly random in a security or mathematical sense. I used the most recent ETH block at the time as my seed number - 11981207.
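
As a minimal sketch of that idea (not the script itself, just the seeding behavior it relies on), Python's random module produces the same chain of numbers every time it is seeded with the same value:

    import random

    SEED = 11981207  # the ETH block number used as the seed

    random.seed(SEED)
    first_run = [random.randint(1, 1000) for _ in range(5)]

    random.seed(SEED)
    second_run = [random.randint(1, 1000) for _ in range(5)]

    print(first_run == second_run)  # True - same seed, same 'random' numbers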

There is then a 'chain' of additional random numbers generated that are used to define all of the various traits of the birds. Many of the attributes generate a random number between 1-1000 and use that for some sort of logical statement (e.g. to decide beak color).

Interestingly, the way I've used this random-number chain seems to have resulted in some specific behavior and combinations that I can't yet personally explain. For example, the way the bird-type selection random number and the beak-color selection random number chain from one another seems to have resulted in no red-beaked woodpeckers. I also noticed that all four of the cockatoos generated seemed to have the exact same blue crest. Because I wanted more variety than that, in the minted NFTs I ran another batch of cockatoos and replaced the second, third, and fourth cockatoos in the original sequence with others that provided more variety. I made no other changes to the randomly generated set of birds, and if you run the script yourself (without changing the seed) you should see identical matches for each number. If you spot a pattern in why the random numbers behave this way, I would love to discuss it on Twitter or Discord.

  • Head color and throat color are each built from three random 1-255 values, one for each of the RGB channels in a color.
  • Eye color looks at a random number between 1-1000, and if it's 47 or less, gives the bird crazy eyes. Crazy eyes always have the same pupil color (154, 0, 0) and then generate a random color for the 'white of the eye,' in the same way the head and throat colors are generated.
  • Beak color is determined by evaluating another 1-1000 'random' number. Grey is most common, gold is also common, red is rare, and black is very rare. (A sketch of this trait logic follows the list.)
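
Here's a rough sketch of that trait logic. The RGB ranges, the 47-in-1000 crazy-eye cutoff, and the (154, 0, 0) pupil come from the description above; the normal eye colors and the beak cutoffs/colors are illustrative guesses, not the script's exact values:

    import random

    def random_color():
        # three random 1-255 values, one per RGB channel
        return (random.randint(1, 255), random.randint(1, 255), random.randint(1, 255))

    head_color = random_color()
    throat_color = random_color()

    # eye color: 47-in-1000 chance of 'crazy eyes'
    if random.randint(1, 1000) <= 47:
        pupil_color = (154, 0, 0)         # crazy eyes always use this pupil
        eye_white_color = random_color()  # random 'white of the eye'
    else:
        pupil_color = (0, 0, 0)           # assumption: normal pupil is black
        eye_white_color = (255, 255, 255) # assumption: normal eye-white is white

    # beak color: cutoffs and RGB values here are illustrative only
    beak_roll = random.randint(1, 1000)
    if beak_roll <= 10:
        beak_color = (0, 0, 0)            # black - very rare
    elif beak_roll <= 60:
        beak_color = (190, 60, 50)        # red - rare
    elif beak_roll <= 400:
        beak_color = (220, 170, 80)       # gold - common
    else:
        beak_color = (150, 150, 150)      # grey - most common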

The bird images are 24x24 arrays of variables, representing every pixel in the final image. I've used variables with two letters for each type of pixel (e.g. outline ol, head color hd, beak color bk), so as to keep the 'matrix' of pixel variables easy to see and work with. If you zoom way out on the code you may even be able to see a rough picture of the birds in the code, just from the slight differences in the variables.
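
For a feel of what that matrix looks like, here are a few made-up 24-wide rows (illustrative values, not one of the real bird templates) - in the actual code every pixel variable is written out explicitly, which is what lets the bird shape show through when you zoom out:

    # two-letter pixel variables: bg = background, ol = outline, hd = head, bk = beak
    bg = (129, 113, 143)   # illustrative RGB values only
    ol = (0, 0, 0)
    hd = (200, 60, 120)    # in the real script these come from the random trait colors
    bk = (150, 150, 150)

    # three made-up rows of a 24-pixel-wide template; a full bird is 24 such rows
    pixel_rows = [
        [bg]*7 + [ol]*8 + [bg]*9,
        [bg]*6 + [ol] + [hd]*7 + [ol] + [bg]*9,
        [bg]*6 + [ol] + [hd]*5 + [bk]*3 + [ol] + [bg]*8,
    ]
    print([len(row) for row in pixel_rows])  # [24, 24, 24]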

The script uses another 1-1000 'random' number to decide which of the bird type templates to use. Basic bird is most common, at about 75% probability. Jay has a 15% probability, woodpecker has 6%, eagle has 3.5%, and cockatoo has half a percent probability.
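
Those percentages map directly onto cutoffs in the 1-1000 roll; a minimal sketch (the order and exact form of the checks in the real script may differ):

    import random

    type_roll = random.randint(1, 1000)

    if type_roll <= 5:        # 0.5% - cockatoo
        bird_type = "cockatoo"
    elif type_roll <= 40:     # 3.5% - eagle
        bird_type = "eagle"
    elif type_roll <= 100:    # 6%   - woodpecker
        bird_type = "woodpecker"
    elif type_roll <= 250:    # 15%  - jay
        bird_type = "jay"
    else:                     # 75%  - basic bird
        bird_type = "basic bird"

    print(type_roll, bird_type)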

From there, you're just about home free. The final bit of the loop re-sizes the generated bird from 24x24 pixels up to 480x480 pixels. It generates the image file path dynamically using the os library (so it works wherever you keep the folder), and saves the image into the included bird_images folder.
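
A sketch of that final step, assuming a nearest-neighbor resize (which keeps the pixel-art edges crisp; the actual filter and file naming in the script may differ):

    import os
    import numpy as np
    from PIL import Image

    # stand-in 24x24 grid of RGB tuples - in the real loop this is the filled-in bird matrix
    pixel_rows = [[(129, 113, 143)] * 24 for _ in range(24)]

    array = np.array(pixel_rows, dtype=np.uint8)           # shape (24, 24, 3)
    img = Image.fromarray(array)                           # 24x24 RGB image
    img = img.resize((480, 480), resample=Image.NEAREST)   # upscale to 480x480

    # build the output path relative to wherever this script lives
    out_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "bird_images")
    os.makedirs(out_dir, exist_ok=True)
    img.save(os.path.join(out_dir, "bird_1.png"))          # hypothetical file name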

Then it goes right back to the top of the loop, and does it again for the next bird, until the number of defined loops is completed.

Wrap up

I sincerely hope this inspires someone to learn a new skill, take up coding, or generally expand their horizons! I won't profess to be a professional coder, but I am a technologist in my day job and have found it to be a fulfilling and rewarding life path. This BitBirds project has been a joy to be involved in. The community that has popped up around it already has been inspiring, and I'm excited to see it grow in the years to come.

If you would like to show your thanks for this shared asset, I'd encourage you to plant some trees! https://onetreeplanted.org/collections/all

If you feel absolutely compelled to send ETH or NFTs to me directly, please know that it is not necessary, but the BitBirds project hardware wallet address is: 0x1fd146a5e6152c5ACd3A013fBC42A243e4DfCe63

Thanks for everything!
