Overview

Timescale NFT Starter Kit

The Timescale NFT Starter Kit is a step-by-step guide to get up and running with collecting, storing, analyzing and visualizing NFT data from OpenSea, using PostgreSQL and TimescaleDB.

The NFT Starter Kit will give you a foundation for analyzing NFT trends so that you can bring some data to your purchasing decisions, or just learn about the NFT space from a data-driven perspective. It also serves as a solid foundation for your more complex NFT analysis projects in the future.

We recommend following along with the NFT Starter Kit tutorial to get familiar with the contents of this repository.

For more information about the NFT Starter Kit, see the announcement blog post.

Project components

Earn a Time Travel Tiger NFT

Time Travel Tigers is a collection of 20 hand-crafted NFTs featuring Timescale’s mascot, Eon the friendly tiger, as he travels through space and time, spreading the word about time-series data while wearing various disguises to blend in. The first 20 people to complete the NFT Starter Kit tutorial can earn a limited-edition NFT from the collection, for free! Simply download the NFT Starter Kit, complete the tutorial, and fill out this form, and we’ll send one of the limited-edition Eon NFTs to your ETH address (at no cost to you!).

Get started

Clone the nft-starter-kit repository:

git clone https://github.com/timescale/nft-starter-kit.git
cd nft-starter-kit

Setting up the pre-built Superset dashboards

This part of the project is fully Dockerized. TimescaleDB and the Superset dashboards are built automatically using docker-compose. After completing the steps below, you will have local TimescaleDB and Superset instances running in containers, preloaded with 500K+ NFT transactions from OpenSea.

The Docker services use ports 8088 (Superset) and 6543 (TimescaleDB), so make sure no other services are using those ports before starting the installation process.
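If you want to check the ports programmatically before bringing the containers up, a small sketch like the following works (this helper is not part of the starter kit; the port numbers are the defaults mentioned above):

```python
# Quick port-availability check before starting the containers.
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex returns 0 on a successful connection, i.e. port is busy
        return sock.connect_ex((host, port)) == 0

for port in (8088, 6543):
    status = "BUSY" if port_in_use(port) else "free"
    print(f"port {port}: {status}")
```

If either port reports BUSY, stop the conflicting service (or change the port mapping in docker-compose.yml) before continuing.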

Prerequisites

  • Docker

  • Docker compose

    Verify that both are installed:

    docker --version && docker-compose --version

Instructions

  1. Run docker-compose up --build in the /pre-built-dashboards folder:

    cd pre-built-dashboards
    docker-compose up --build

    Wait until the process completes (it can take a couple of minutes); you should see:

    timescaledb_1      | PostgreSQL init process complete; ready for start up.
  2. Go to http://0.0.0.0:8088/ in your browser and log in with these credentials:

    user: admin
    password: admin
    
  3. Open the Databases page inside Superset (http://0.0.0.0:8088/databaseview/list/). You will see exactly one item there called NFT Starter Kit.

  4. Click the edit button (pencil icon) on the right side of the table (under "Actions").

  5. Don't change anything in the popup window; just click Finish. This confirms that the database can be reached from Superset.

  6. Go check out your NFT dashboards!

    Collections dashboard: http://0.0.0.0:8088/superset/dashboard/1

    Assets dashboard: http://0.0.0.0:8088/superset/dashboard/2

Running the data ingestion script

If you'd like to ingest data into your database (whether a local TimescaleDB or Timescale Cloud) straight from the OpenSea API, follow these steps to configure and run the ingestion script:

Prerequisites

  • Python 3 and virtualenv

  • A running TimescaleDB instance (local or Timescale Cloud)

  • (Optional) An OpenSea API key, in case your requests get blocked

Instructions

  1. Go to the root folder of the project:
    cd nft-starter-kit
  2. Create a new Python virtual environment and install the requirements:
    virtualenv env && source env/bin/activate
    pip install -r requirements.txt
  3. Replace the parameters in the config.py file:
    DB_NAME="tsdb"
    HOST="YOUR_HOST_URL"
    USER="tsdbadmin"
    PASS="YOUR_PASSWORD_HERE"
    PORT="PORT_NUMBER"
    OPENSEA_START_DATE="2021-10-01T00:00:00" # example start date (UTC)
    OPENSEA_END_DATE="2021-10-06T23:59:59" # example end date (UTC)
  4. Run the Python script:
    python opensea_ingest.py
    This will start ingesting data in batches, ~300 rows at a time:
    Start ingesting data between 2021-10-01 00:00:00+00:00 and 2021-10-06 23:59:59+00:00
    ---
    Fetching transactions from OpenSea...
    Data loaded into temp table!
    Data ingested!
    Data has been backfilled until this time: 2021-10-06 23:51:31.140126+00:00
    ---
    You can stop the ingesting process anytime (Ctrl+C), otherwise the script will run until all the transactions have been ingested from the given time period.
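The batch-by-batch behavior above can be sketched as a loop over time windows, where each window is fetched and ingested before moving on (a simplified illustration, not the actual opensea_ingest.py; the function name and window size are assumptions):

```python
# Sketch: split the configured time range into windows so each batch is
# ingested independently; interrupting with Ctrl+C only loses the window
# currently in flight, and everything already committed stays in the DB.
from datetime import datetime, timedelta
from typing import Iterator, Tuple

def time_windows(start: datetime, end: datetime,
                 step: timedelta = timedelta(hours=6)) -> Iterator[Tuple[datetime, datetime]]:
    """Yield consecutive (window_start, window_end) pairs covering [start, end)."""
    cursor = start
    while cursor < end:
        window_end = min(cursor + step, end)
        yield cursor, window_end
        cursor = window_end

# Example: how a one-day run would be chunked into 6-hour batches
windows = list(time_windows(datetime(2021, 10, 1), datetime(2021, 10, 2)))
print(len(windows))  # 4
```

In the real script each window maps to one or more OpenSea API requests (~300 rows each), followed by a load into a temp table and an insert into nft_sales.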

Ingest the sample data

If you don't want to wait for a decent amount of data to be ingested, you can use our sample dataset, which contains 500K+ sale transactions from OpenSea (this sample was also used for the Superset dashboards).

Prerequisites

  • psql (the PostgreSQL command-line client)

  • A TimescaleDB instance with the NFT Starter Kit tables already created

Instructions

  1. Go to the folder with the sample CSV files (or you can also download them from here):
    cd pre-built-dashboards/database/data
  2. Connect to your database with psql:
    psql -x "postgres://host:port/tsdb?sslmode=require"
    If you're using Timescale Cloud, the instructions under How to Connect provide a customized command to run to connect directly to your database.
  3. Import the CSV files in this order (it can take a few minutes in total):
    \copy accounts FROM 001_accounts.csv CSV HEADER;
    \copy collections FROM 002_collections.csv CSV HEADER;
    \copy assets FROM 003_assets.csv CSV HEADER;
    \copy nft_sales FROM 004_nft_sales.csv CSV HEADER;
  4. Try running some queries on your database:
    SELECT count(*), MIN(time) AS min_date, MAX(time) AS max_date FROM nft_sales;
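For a more TimescaleDB-flavored query, you can bucket sales by day (time_bucket is a TimescaleDB hyperfunction; this assumes only the time column used above):

```sql
-- Daily sale counts over the sample period
SELECT time_bucket('1 day', time) AS day,
       count(*) AS sales
FROM nft_sales
GROUP BY day
ORDER BY day;
```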
Issues
  • Data ingestion script gives KeyError

    Hi there!

    I have done all the setup for this project (installed PostgreSQL & TimescaleDB, connected to a Timescale Cloud instance, and set up config.py). However, upon execution I got a KeyError:

    [screenshot: nft-scraper-issue]

    Do you know what went wrong and how I could fix this?

    Thanks!

    opened by hwixley 2
  • Bump streamlit from 1.5.0 to 1.11.1 in /pre-built-dashboards/streamlit

    Bumps streamlit from 1.5.0 to 1.11.1.

    Release notes

    Sourced from streamlit's releases. Versions 1.5.1 and 1.8.0 through 1.11.1 shipped with no release notes; the notable entries in this range:

    1.7.0

    • ❄️ Add st.snow()!

    1.6.0

    • 🗜 WebSocket compression is now disabled by default, which improves CPU and latency performance for large dataframes. You can use the server.enableWebsocketCompression configuration option to re-enable it if you find the increased network traffic more impactful.
    • ☑️ 🔘 Radio and checkboxes improve focus on keyboard navigation (#4308)

    dependencies 
    opened by dependabot[bot] 1
  • Request blocked

    Followed the instructions and got the DB connected. When running opensea_ingest.py I'm seeing

    Fetching transactions from OpenSea...
    None
    Request blocked... retrying...
    
    

    output in the console... I realized I don't have an OpenSea API key. Is this why?

    opened by alimcharaniya 1
  • Fix ingesting script

    • Cleaning up the ingestion script
    • Using the opensea-api package for better maintainability
    • Implementing a better way to paginate the events endpoint using the new OpenSea cursor-based pagination
    • Fixing OpenSea issues
    opened by zseta 0
  • Add apikey field to config.py

    This is a workaround for those who experience blocks from the OpenSea API. You can request an API key here: https://docs.opensea.io/reference/request-an-api-key

    opened by zseta 0
  • Error loading some charts/data

    Hello and thank you for the amazing work you have put together.

    I'm currently having issues with some charts not showing, and I'm seeing some errors in the terminal.

    The field timeseries_limit is deprecated, please use series_limit instead.
    2022-01-13 16:09:12,392:WARNING:superset.common.query_object:The field timeseries_limit is deprecated, please use series_limit instead.
    2022-01-13 16:09:13,335:INFO:werkzeug:172.20.0.1 - - [13/Jan/2022 16:09:13] "POST /superset/log/?explode=events&dashboard_id=2 HTTP/1.1" 200 -

    Query SELECT bucket AS __timestamp,
        asset_id AS asset_id,
        asset_name AS asset_name,
        max(volume_eth) AS "MAX(volume_eth)"
    FROM public.superset_assets_daily
    GROUP BY asset_id, asset_name, bucket
    ORDER BY "MAX(volume_eth)" DESC
    LIMIT 50000 on schema public failed

    Traceback (most recent call last):
      File "/app/superset/connectors/sqla/models.py", line 1601, in query
        df = self.database.get_df(sql, self.schema, mutator=assign_column_label)
      File "/app/superset/models/core.py", line 431, in get_df
        df = result_set.to_pandas_df()
      File "/app/superset/result_set.py", line 202, in to_pandas_df
        return self.convert_table_to_df(self.table)
      File "/app/superset/result_set.py", line 177, in convert_table_to_df
        return table.to_pandas(integer_object_nulls=True)
      File "pyarrow/array.pxi", line 757, in pyarrow.lib._PandasConvertible.to_pandas
      File "pyarrow/table.pxi", line 1748, in pyarrow.lib.Table._to_pandas
      File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 789, in table_to_blockmanager
        blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)
      File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 1130, in _table_to_blocks
        return [_reconstruct_block(item, columns, extension_columns)
      File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 733, in _reconstruct_block
        dtype = make_datetimetz(item['timezone'])
      File "/usr/local/lib/python3.8/site-packages/pyarrow/pandas_compat.py", line 758, in make_datetimetz
        tz = pa.lib.string_to_tzinfo(tz)
      File "pyarrow/types.pxi", line 1927, in pyarrow.lib.string_to_tzinfo
      File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
      File "/usr/local/lib/python3.8/site-packages/pytz/__init__.py", line 188, in timezone
        raise UnknownTimeZoneError(zone)
    pytz.exceptions.UnknownTimeZoneError: '+00'

    (The same failed query and traceback are logged a second time, after which the chart request fails:)
    2022-01-13 16:09:14,530:INFO:werkzeug:172.20.0.1 - - [13/Jan/2022 16:09:14] "POST /api/v1/chart/data?form_data=%7B%22slice_id%22%3A10%7D&dashboard_id=2&force=true HTTP/1.1" 400 -

    opened by fpena06 24
  • Error loading chart datasources. Filters may not work correctly.

    Hi.

    I'm new and just setting up the nft-starter-kit. After following all the steps of the tutorial, I'm heading to the dashboard and my data isn't loading. I'm getting the following error: "Error loading chart datasources. Filters may not work correctly." I haven't set any filters, so I don't think that's the problem.

    Thanks,

    mmm

    opened by mchamyan 3