Write Python locally, execute SQL in your data warehouse

Overview

RasgoQL Hero

RasgoQL

Write Python locally, execute SQL in your data warehouse
≪ Read the Docs   ·   Join Our Slack »

RasgoQL is a Python package that enables you to easily query and transform tables in your Data Warehouse directly from a notebook.

You can quickly create new features, sample data, apply complex aggregates... all without having to write SQL!

Choose from our library of predefined transformations or make your own to streamline the feature engineering process.

RasgoQL 30-second demo

Why is this package useful?

Data scientists spend much of their time in pandas preparing data for modelling. When they are ready to deploy or scale, two pain points arise:

  1. pandas cannot handle larger volumes of data, forcing the use of VMs or code refactoring.
  2. Feature data must be added to the Enterprise Data Warehouse for future processing, requiring the code to be refactored into SQL.

We created RasgoQL to solve these two pain points.

Learn more at https://docs.rasgoql.com.

How does it work?

Under the covers, RasgoQL sends all processing to your Data Warehouse, enabling the efficient transformation of massive datasets. RasgoQL only needs basic metadata to execute transforms, so your private data remains secure.

RasgoQL workflow diagram

RasgoQL does these things well:

  • Pulls existing Data Warehouse tables into pandas DataFrames for analysis (see the sketch after this list)
  • Constructs SQL queries using a syntax that feels like pandas
  • Creates views in your Data Warehouse to save transformed data
  • Exports runnable SQL in .sql files or dbt-compliant .yaml files
  • Offers dozens of free SQL transforms to use
  • Coming Soon: allows users to create & add custom transforms
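
For instance, the first two bullets look roughly like this in practice. This is a minimal sketch that assumes a connected client named rql, created as shown in the Quick Start below; calling to_df() directly on a Dataset is an assumption, mirroring the SQLChain method used later in this README.

# Wrap an existing warehouse table -- no data is copied locally yet
ds = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

# Pull the table into a pandas DataFrame for local analysis
# (assumption: Dataset exposes to_df() just like the SQLChain in the Quick Start)
df = ds.to_df()

# Build SQL with pandas-like syntax and inspect it without executing anything
chain = ds.datetrunc(dates={'ORDERDATE': 'week'})
print(chain.sql())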

Rasgo supports Snowflake, BigQuery, Postgres, and Amazon Redshift, with more Data Warehouses being added soon. If you'd like to suggest another database type, submit your idea to our GitHub Discussions page so that other community members can weigh in and show their support.
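
For example, a BigQuery connection uses rasgoql.BigQueryCredentials instead of SnowflakeCredentials. The sketch below follows the credentials pattern shown in the BigQuery reports further down; the key path, project, and dataset names are placeholders.

import rasgoql

# Service-account key file plus the GCP project and dataset to work in
# (all three values below are placeholders)
bq_creds = rasgoql.BigQueryCredentials(
    json_filepath="/path/to/service_account.json",
    project="my-project",
    dataset="my_dataset",
)
rql = rasgoql.connect(bq_creds)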

Can RasgoQL help you?

  • If you use pandas to build features but are working on a massive set of data that won't fit in your machine's memory, RasgoQL can help!

  • If your organization uses dbt or another SQL tool to run production data flows, but you prefer to build features in pandas, RasgoQL can help!

  • If you know pandas but not SQL and want to learn how your queries will translate, RasgoQL can help!

Where to get it

Just run a simple pip install.

pip install rasgoql~=1.0

Report Bug · Suggest Improvement · Request Feature

Quick Start

pip install rasgoql --upgrade

import rasgoql

# Build credentials for your data warehouse
creds = rasgoql.SnowflakeCredentials(
    account="",
    user="",
    password="",
    role="",
    warehouse="",
    database="",
    schema=""
)

# Connect to DW
rql = rasgoql.connect(creds)

# List available tables
rql.list_tables('ADVENTUREWORKS').head(10)

# Allow RasgoQL to interact with an existing table in your Data Warehouse
dataset = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

# Take a peek at the data
dataset.preview()

# Use the datetrunc transform to truncate order dates to weeks
weekly_sales = dataset.datetrunc(dates={'ORDERDATE':'week'})

# Aggregate to sum of sales for each week
agg_weekly_sales = weekly_sales.aggregate(
    group_by=['PRODUCTKEY', 'ORDERDATE_WEEK'],
    aggregations={'SALESAMOUNT': ['SUM']},
    )

# Quickly validate output
agg_weekly_sales.to_df()

# Print the SQL
print(agg_weekly_sales.sql())
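
From here, the same chain can be handed off to production tooling. The sketch below continues from the agg_weekly_sales chain above; the output_directory parameter matches the SQLChain.to_dbt signature, the directory name is a placeholder, and the no-argument save() call for materializing the chain as a view is an assumption about its defaults.

# Export the chain as a dbt-ready model file (directory name is a placeholder)
agg_weekly_sales.to_dbt(output_directory='./models')

# Materialize the chain as a view in the Data Warehouse
# (assumption: save() works without arguments; it may accept naming options)
agg_weekly_sales.save()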

Getting Started Tutorials

The best way to get familiar with the RasgoQL basics is by running through the notebooks in the tutorials folder.

Advanced Examples

Joins

Easily join tables together using the join transform.

sales_dataset = rql.dataset('ADVENTUREWORKS.PUBLIC.FACTINTERNETSALES')

sales_product_dataset = sales_dataset.join(
    join_table='DIM_PRODUCT',
    join_columns={'PRODUCTKEY': 'PRODUCTKEY'},
    join_type='LEFT',
    join_prefix='PRODUCT')

sales_product_dataset.sql()
sales_product_dataset.preview()

Rasgo Join Example

Chain transforms together

Create rolling aggregations and then drop unnecessary columns.

sales_agg_drop = sales_dataset.rolling_agg(
    aggregations={"SALESAMOUNT": ["MAX", "MIN", "SUM"]},
    order_by="ORDERDATE",
    offsets=[-7, 7],
    group_by=["PRODUCTKEY"],
).drop_columns(exclude_cols=["ORDERDATEKEY"])

sales_agg_drop.sql()
sales_agg_drop.preview()

Multiple rasgoql transforms

Transpose unique values with pivots

Quickly generate pivot tables of your data.

sales_by_product = sales_dataset.pivot(
    dimensions=['ORDERDATE'],
    pivot_column='SALESAMOUNT',
    value_column='PRODUCTKEY',
    agg_method='SUM',
    list_of_vals=['310', '345'],
)

sales_by_product.sql()
sales_by_product.preview()

Rasgoql pivot example

Does any of my data get collected?

Rasgo will not collect any personal information. We log execution of methods in transforms.py for success and failure so that we can more accurately track what's useful and what's problematic.

Where do I go for help?

If you have any questions, please check:

  1. RasgoQL Docs
  2. Slack
  3. GitHub Issues

How can I contribute?

Review the contributors guide

License

RasgoQL uses the GNU AGPL license, as found in the LICENSE file.

This project is sponsored by RasgoML. Find out more at https://www.rasgoml.com/

Comments
  • [BigQuery] fqtn is not valid if project name contains '-'

    Hello,

    The fqtn of the table I want to get follows this pattern: my-awesome-project.schema.table. I tried to get it using rql.dataset(fqtn="my-awesome-project.schema.table") but I get ValueError: my-awesome-project.schema.table is not a well-formed fqtn. It seems that the validate_fqtn() function applies the regex \w+\.\w+\.\w+, which doesn't accept my GCP project name pattern. Is there a way to make this work without changing my GCP project name?

    Thank you for this awesome package, I can't wait to try it! ❤️ 🚀

    bug 
    opened by amirbtb 10
  • [BigQuery] `to_dbt()` raises IndexError

    Hi,

    I am trying to generate dbt files (.sql & .yml) from a SQLChain using to_dbt(). The source table is a regular table. I'm using rasgoql 1.0.2a2.

    Here is my code; I'm just trying to generate base SQL code for casting the table:

    import rasgoql
    from rasgoql import BigQueryCredentials
    
    PROJECT = "my-project"
    DATASET = "dataset"
    
    creds = BigQueryCredentials(
        json_filepath="/credentials/path",
        project=PROJECT,
        dataset=DATASET
    )
    rql = rasgoql.connect(creds)
    
    ds = rql.dataset(fqtn=f"{PROJECT}.{DATASET}.table")
    
    schema_dict = {column:data_type for column, data_type in ds.get_schema()}
    schema_dict
    
    ds_casted = ds.transform(
      transform_name='cast',
      casts=schema_dict
    )
    
    ds_casted.to_dbt('./test')
    
    

    I get the following error:

    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    /src/test.ipynb Cell 1
    ----> 1 ds_casted.to_dbt('./test')

    File /usr/local/lib/python3.8/dist-packages/rasgoql/utils/decorators.py:40, in beta.<locals>.wrapper(*args, **kwargs)
         30 @functools.wraps(func)
         31 def wrapper(*args, **kwargs):
         32     logger.info(
         33         f'{func.__name__} is a beta feature. '
         34         'Its functionality and parameters may change in future versions and '
       (...)
         38         'or contact us directly on slack.'
         39     )
    ---> 40     return func(*args, **kwargs)

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/transforms.py:402, in SQLChain.to_dbt(self, output_directory, file_name, config_args, include_schema)
        394         chn_logger.warning(
        395             'Unexpected error generating the schema of this SQLChain. '
        396             'Your model.sql file will be generated without a schema.yml file. '
       (...)
        399             'your_chn.save() to update the view definition in your Data Warehouse.'
        400         )
        401     schema = []
    --> 402 return create_dbt_files(
        403     self.transforms,
        404     schema,
        405     output_directory,
        406     file_name,
        407     config_args,
        408     include_schema
        409 )

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:105, in create_dbt_files(transforms, schema, output_directory, file_name, config_args, include_schema)
        102 output_directory = output_directory or os.getcwd()
        103 file_name = file_name or f'{transforms[-1].output_alias}.sql'
        104 return save_model_file(
    --> 105     sql_definition=assemble_cte_chain(transforms),
        106     output_directory=output_directory,
        107     file_name=file_name,
        108     config_args=config_args,
        109     include_schema=include_schema,
        110     schema=schema
        111 )

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:36, in assemble_cte_chain(transforms, table_type)
         34     t = transforms[0]
         35     create_stmt = _set_create_statement(table_type, t.fqtn)
    ---> 36     final_select = generate_transform_sql(
         37         t.name,
         38         t.arguments,
         39         t.source_table,
         40         None,
         41         t._dw
         42     )
         43     return create_stmt + final_select
         45 # Handle multi-transform chains

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/rendering.py:124, in generate_transform_sql(name, arguments, source_table, running_sql, dw)
        120 """
        121 Returns the SQL for a Transform with applied arguments
        122 """
        123 templates = rtx.serve_rasgo_transform_templates(dw.dw_type)
    --> 124 udt: 'TransformTemplate' = [t for t in templates if t.name == name][0]
        125 if not udt:
        126     raise TransformRenderingError(f'Cannot find a transform named {name}')

    IndexError: list index out of range
    
    bug 
    opened by amirbtb 9
  • Support upstream snowflake connector

    Is your feature request related to a problem? Please describe.
    Need more of the snowflake connection options that are defined here: https://github.com/snowflakedb/snowflake-connector-python/blob/main/src/snowflake/connector/connection.py#L112

    Describe the solution you'd like
    The ability to directly use the snowflake connector, or all of its options.

    enhancement 
    opened by pbarker 6
  • `ds.transform(name='cast',casts=cast_dict)` creates duplicate columns  | BigQuery

    Hi,

    The preview of ds.transform(name='cast', casts=cast_dict) shows a dataset with both the old and the new (casted) columns. I took a look at cast.sql and see that it starts with a SELECT *. Suggestion: I believe ds.transform(name='cast', casts=cast_dict) should cast the provided columns while keeping the other ones unchanged.

    Thank you 🙏

    enhancement 
    opened by amirbtb 6
  • Support fetching batches from Snowflake

    Is your feature request related to a problem? Please describe.
    Hey, love the tool. I am loading large datasets that won't fit into memory.

    Describe the solution you'd like
    Would like to use https://docs.snowflake.com/en/user-guide/python-connector-api.html#fetch_pandas_batches

    enhancement 
    opened by pbarker 5
  • `to_dbt()` creates a view in the schema of the source table  | BigQuery

    Hi,

    After I run to_dbt(), I noticed that a view is created in the BigQuery schema where the source table (a normal table) is located. The view has the same name as the .sql file created in the output path that I provided to to_dbt(), and its query is similar to the content of the .sql file written by to_dbt(). The ability to quickly create a view based on all the transformations performed via RasgoQL is very useful, but I'm not sure it should be a default output of to_dbt().

    Again, thank you for your work ! 🙏🏽

    bug 
    opened by amirbtb 5
  • Trouble with import

    Describe the bug
    I get this error when I try to pip install rasgoql[snowflake]: no matches found: rasgoql[snowflake]

    Prior to running this, I successfully downloaded the snowflake connector...

    To Reproduce
    Go to bash and type: pip install rasgoql[snowflake]

    Expected Behavior: successful import

    Actual Behavior: bash returns: no matches found: rasgoql[snowflake]

    Version Information (please complete the following information): rasgoql==1.1.1, rasgotransforms==1.1.3

    Additional context
    I'm trying to connect to a trial snowflake account

    opened by mashhype 3
  • `ds.concat` doesn't accept the `name` argument | BigQuery

    Hello,

    I tried to use the ds.concat() method as shown in the example in the documentation. ds.concat(concat_list=['first_column',"'-'",'second_column'], name="both_columns") returns the following error:

    File /usr/local/lib/python3.8/dist-packages/rasgoql/primitives/transforms.py:54, in TransformableClass._create_aliased_function.<locals>.f(*arg, **kwargs)
         53 def f(*arg, **kwargs) -> 'SQLChain':
    ---> 54     return self.transform(name=transform.name, *arg, **kwargs)

    TypeError: transform() got multiple values for keyword argument 'name'

    When I don't provide the name argument, the function works.

    Thanks again !

    bug 
    opened by amirbtb 3
  • New DB Request | BigQuery

    Would like to use RasgoQL with BigQuery

    Open questions:

    • will transform templates need changes to support BigQuery SQL syntax?
    • may need to support google oauth login since creds are typically tied to a Google account
    enhancement 
    opened by cpdough 3
  • Nest Transform Arguments

    This PR allows a transform to accept a Dataset or SQLChain as an input argument. The new logic flattens the primitive to either a fqtn or a CTE wrapped in parentheses and nests it in the running CTE. I don't know why this works, but 10 Budweisers can't be wrong. JK! It was 11.

    opened by griffatrasgo 1
  • #47 RAS-2651 Adding Amazon Redshift

    Adding Amazon Redshift support.

    Test file attached here _test_demo_redshift.zip

    Need the following environment variables:

    REDSHIFT_USER="<dbuser>"
    REDSHIFT_PASSWORD="<dbpass>"
    REDSHIFT_DATABASE="dev"
    REDSHIFT_SCHEMA="public"
    REDSHIFT_HOST="<cluster-host>"
    REDSHIFT_PORT=5439
    REDSHIFT_DB_USER="<dbuser>"
    
    opened by ChrisGriffithRASGO 1
  • Document connecting to data warehouses using dictionary args

    What feature are you requesting?

    There aren't any docs on connecting to a data warehouse using a dictionary. The current credential classes are limited and gave the impression I wouldn't be able to connect to my warehouse

    Are you using a workaround to do it in or outside of the product today?

    Read the code and figured it out

    How important is this feature to your continued use of the package? Can you qualify the value / importance of this feature in any way?

    I think this is pretty important since it gave me the impression I couldn't use this product

    enhancement 
    opened by pbarker 2
Releases (1.6.4)
  • 1.6.4(Jul 5, 2022)

    Version 1.6.4 - 2022-07-05

    Changed

    • Changed default behavior of to_dbt function. Instead of always appending model details to the schema.yml file (which creates duplicate entries for existing models), rql will now check if a model entry already exists in the file and overwrite it. If the model does not exist, it will be appended.
  • 1.6.3(Jun 28, 2022)

  • 1.6.2(Jun 27, 2022)

  • 1.6.1(Jun 27, 2022)

    Version 1.6.1 - 2022-06-27

    Fixed

    • Fixed a bug in the get_schema method of SQLAlchemy DW classes where users were being asked to enter an overwrite param they cannot access
  • 1.6.0(Jun 23, 2022)

    Version 1.6.0 - 2022-06-23

    Changed

    • Changed the get_schema method on all DW classes to accept a single fqtn_or_sql variable
    • Changed the behavior of transform arguments: when a Dataset or SQLChain class is passed in as an argument to a transform, it is automatically flattened to its corresponding fqtn or CTE then consumed in the transform.
  • 1.5.6(Jun 21, 2022)

    Version 1.5.6 - 2022-06-20

    Changed

    • Changed the get_schema method on Snowflake and BigQuery DW classes to get output columns without creating views
  • 1.5.5(Jun 7, 2022)
