Generate LookML views from dbt models

Overview

dbt2looker

Use dbt2looker to generate Looker view files automatically from dbt models.

Features

  • Column descriptions synced to Looker
  • A dimension for each column in the dbt model
  • Dimension groups for datetime/timestamp/date columns
  • Measures defined through dbt column metadata (see below)
  • Looker types
  • Warehouses: BigQuery, Snowflake, Redshift (Postgres to come)

[demo animation]

Quickstart

Run dbt2looker in the root of your dbt project after generating dbt docs.

Generate Looker view files for all models:

dbt docs generate
dbt2looker

Generate Looker view files for all models tagged prod:

dbt2looker --tag prod

Install

Install from PyPI

Install from PyPI into a fresh virtual environment.

# Create virtual env
python3.7 -m venv dbt2looker-venv
source dbt2looker-venv/bin/activate

# Install
pip install dbt2looker

# Run
dbt2looker

Build from source

Requires Poetry and Python >= 3.7

# Install
poetry install

# Run
poetry run dbt2looker

Defining measures

You can define Looker measures in your dbt schema.yml files. For example:

models:
  - name: pages
    columns:
      - name: url
        description: "Page url"
      - name: event_id
        description: "unique event id for page view"
        meta:
          measures:
            page_views:
              type: count
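
Given the schema above, the generated view would look roughly like the following (a sketch; the exact output varies by version and warehouse, and the table name here is assumed):

```lookml
view: pages {
  sql_table_name: analytics.pages ;;

  dimension: url {
    type: string
    sql: ${TABLE}.url ;;
    description: "Page url"
  }

  dimension: event_id {
    type: string
    sql: ${TABLE}.event_id ;;
    description: "unique event id for page view"
  }

  measure: page_views {
    type: count
    sql: ${TABLE}.event_id ;;
  }
}
```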
Comments
  • Column Type None Error - Field's Not Converting To Dimensions

    When running dbt2looker --tag marts on my mart models, I receive dozens of errors around none type conversions.

    20:54:28 WARNING Column type None not supported for conversion from snowflake to looker. No dimension will be created.

    Here is the example of the schema.yml file.

    [screenshot of schema.yml]

    The interesting thing is that it correctly recognizes the doc that corresponds to the model. The explore within the model file is correct and has the correct documentation.

    Not sure if I can be of any more help but let me know if there is anything!

    bug 
    opened by sisu-callum 19
  • ValueError: Failed to parse dbt manifest.json

    Hey! I'm trying to run this package and hitting errors right after installation. I pip-installed dbt2looker and ran the following in the root of my dbt project.

    dbt docs generate
    dbt2looker
    

    This gives me the following error:

    Traceback (most recent call last):
      File "/Users/josh/.pyenv/versions/3.10.0/bin/dbt2looker", line 8, in <module>
        sys.exit(run())
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/cli.py", line 108, in run
        raw_manifest = get_manifest(prefix=args.target_dir)
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/cli.py", line 33, in get_manifest
        parser.validate_manifest(raw_manifest)
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/parser.py", line 20, in validate_manifest
        raise ValueError("Failed to parse dbt manifest.json")
    ValueError: Failed to parse dbt manifest.json

    This is preceded by a whole mess of error messages like such:

    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.created_at: 1639274126.771925 is not of type 'integer'
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.resource_type: 'model' is not one of ['analysis']
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.created_at: 1639274126.771925 is not of type 'integer'
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.resource_type: 'model' is not one of ['test']

    Any idea what might be going wrong here? Happy to provide more detail. Thank you!

    opened by jdavid459 6
  • DBT version 1.0

    Hi,

    Does this library support dbt version 1.0 and onward? I can't get it to run at all. There are a lot of errors when checking the schema of the manifest.json file.

    / Andrea

    opened by AndreasTA-AW 3
  • Multiple manifest.json/catalog.json/dbt_project.yml files found in path ./

    When running

    dbt2looker --tag test
    

    I get

    $ dbt2looker --tag test
    19:31:20 WARNING Multiple manifest.json files found in path ./ this can lead to unexpected behaviour
    19:31:20 WARNING Multiple catalog.json files found in path ./ this can lead to unexpected behaviour
    19:31:20 WARNING Multiple dbt_project.yml files found in path ./ this can lead to unexpected behaviour
    19:31:20 INFO   Generated 0 lookml views in ./lookml/views
    19:31:20 INFO   Generated 1 lookml model in ./lookml
    19:31:20 INFO   Success
    

    and no lookml files are generated.

    I assume this is because I have multiple dbt packages installed? Is there a way to get around this? Otherwise, a feature request would be the ability to specify which files should be used - perhaps in a separate dbt2looker.yml settings file.

    enhancement 
    opened by arniwesth 3
  • Support Bigquery BIGNUMERIC datatype

    Previously, dbt2looker would not create a dimension for a field with data type BIGNUMERIC, since Looker didn't support converting BIGNUMERIC. So when we ran dbt2looker in the CLI there was a warning: WARNING Column type BIGNUMERIC not supported for conversion from bigquery to looker. No dimension will be created. However, as of November 2021, Looker officially supports BigQuery BIGNUMERIC (link). Please help to add this. Thank you,
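
    Internally, support like this typically amounts to extending the adapter-to-Looker type map. A hypothetical sketch (the map and function names here are illustrative, not dbt2looker's actual internals):

```python
# Hypothetical map from BigQuery column types to Looker dimension types.
BIGQUERY_TO_LOOKER = {
    "INT64": "number",
    "FLOAT64": "number",
    "NUMERIC": "number",
    "BIGNUMERIC": "number",  # supported by Looker since November 2021
    "STRING": "string",
    "BOOL": "yesno",
}

def map_bigquery_type(column_type: str):
    """Return the Looker type for a BigQuery column type, or None if unsupported."""
    return BIGQUERY_TO_LOOKER.get(column_type.upper())

print(map_bigquery_type("bignumeric"))
```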

    opened by IL-Jerry 2
  • Adding Filters to Meta Looker Config in schema.yml

    Use Case: Given that programmatic creation of all LookML files is the goal, there are a couple of features that could be added to give people more flexibility in measure creation. The first one I could think of was filters. Individuals would use filters to calculate measures like Active Users (e.g. count_distinct of user ids where some sort of flag is true).

    The following code is admitted techno-babble, as I don't fully understand pydantic and my Python is almost exclusively pandas-based.

    def lookml_dimensions_from_model(model: models.DbtModel, adapter_type: models.SupportedDbtAdapters):
        return [
            {
                'name': column.name,
                'type': map_adapter_type_to_looker(adapter_type, column.data_type),
                'sql': f'${{TABLE}}.{column.name}',
                'description': column.description,
                'filters': [{f.name: f.value} for f in column.meta.looker.filters],
            }
            for column in model.columns.values()
            if map_adapter_type_to_looker(adapter_type, column.data_type) in looker_scalar_types
        ]


    def lookml_measures_from_model(model: models.DbtModel):
        return [
            {
                'name': measure.name,
                'type': measure.type.value,
                'sql': f'${{TABLE}}.{column.name}',
                'description': f'{measure.type.value.capitalize()} of {column.description}',
                'filters': [{f.name: f.value} for f in column.meta.looker.filters],
            }
            for column in model.columns.values()
            for measure in column.meta.looker.measures
        ]
    

    It's pretty obvious that my Python skills are lacking (and I have no idea if this would actually work), but this idea would add more functionality for those who want to create more dynamic measures. Here is a bare-bones idea of how it could be configured in dbt:

    [screenshot of dbt schema.yml configuration]

    Then the output would look something like:

      measure: page_views {
        type: count
        sql: ${TABLE}.relevant_field ;;
        description: "Count of something."
        filters: [the_name_of_defined_column: "value_of_defined_column"]
      }
    
    enhancement 
    opened by sisu-callum 2
  • Incompatible packages when using snowflake

    This error comes up when using with snowflake: https://github.com/snowflakedb/snowflake-connector-python/issues/1206

    It is remedied by the simple line pip install 'typing-extensions>=4.3.0', but dbt2looker depends on typing-extensions < 4.0.0:

    dbt2looker 0.9.2 requires typing-extensions<4.0.0,>=3.10.0, but you have typing-extensions 4.3.0 which is incompatible.
    
    opened by owlas 1
  • Allowing skipping dbt manifest validation.

    Some users rely heavily on the manifest to enhance their work with dbt. IMHO, in such cases, the library should not enforce any schema validation; it should be the user's responsibility to keep the Looker generation working.
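
    One way to expose this would be an opt-out flag on the CLI (the flag name here is hypothetical; the project ultimately removed strict validation instead):

```python
import argparse

parser = argparse.ArgumentParser(prog="dbt2looker")
# Hypothetical flag sketch: let users bypass manifest.json schema validation.
parser.add_argument(
    "--skip-manifest-validation",
    action="store_true",
    help="Generate LookML without validating manifest.json against the dbt schema",
)
args = parser.parse_args(["--skip-manifest-validation"])
print(args.skip_manifest_validation)
```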

    opened by cgrosman 1
  • Redshift type conversions missing

    Redshift has missing type conversions:

    10:07:17 WARNING Column type timestamp without time zone not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type boolean not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type double precision not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type character varying(108) not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 DEBUG  Created view from model dim_appointment with 0 measures, 0 dimensions
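
    Redshift reports verbose type names, including parameterized ones like character varying(108), so a mapping needs to normalize the type before lookup. A sketch of that normalization (the map and function names are illustrative, not dbt2looker's code):

```python
import re

# Illustrative Redshift -> Looker type map covering the types from the warnings above.
REDSHIFT_TO_LOOKER = {
    "timestamp without time zone": "timestamp",
    "boolean": "yesno",
    "double precision": "number",
    "character varying": "string",
}

def map_redshift_type(column_type: str):
    # Strip any precision/length suffix, e.g. "character varying(108)" -> "character varying"
    normalized = re.sub(r"\(.*\)", "", column_type).strip().lower()
    return REDSHIFT_TO_LOOKER.get(normalized)

print(map_redshift_type("character varying(108)"))
```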
    
    bug 
    opened by owlas 1
  • Join models in explores

    Expose config for defining explores with joined models.

    Ideally this would live in a dbt exposure but it's currently missing meta information.

    Add to models for now?

    enhancement 
    opened by owlas 1
  • feat: remove strict manifest validation

    Closes #72 Closes #37

    We have some validation already with typing, and the dbt manifest keeps changing. I think json-schema is causing more problems than it is solving. If we get weird errors, we can introduce some more relaxed validation.

    opened by owlas 0
  • Support group_labels in yml for dimensions

    https://github.com/lightdash/dbt2looker/blob/bb8f5b485ec541e2b1be15363ac3c7f8f19d030d/dbt2looker/models.py#L99

    Measures seem to have this but not dimensions. Probably all or most of the properties available in https://docs.lightdash.com/references/dimensions/ should be represented here -- is this something lightdash is willing to maintain, or would you want a contribution? @TuringLovesDeathMetal / @owlas - I figure there should be full support for lightdash properties that can map to Looker, to maximize the value of this utility for enabling Looker customers to uncouple themselves from Looker.

    opened by mike-weinberg 1
  • Issue when parsing dbt models

    Hey folks!

    I've just run 'dbt2looker' in my local dbt repo folder, and I receive the following error:

    ❯ dbt2looker
    12:11:54 ERROR  Cannot parse model with id: "model.smallpdf.brz_exchange_rates" - is the model file empty?
    Failed
    

    The model file itself (pictured below) is not empty, so I am not sure what issue dbt2looker has parsing this model. It is not materialised as a table or view; dbt treats it as ephemeral - is that of importance when parsing files in the project? I've also tried running dbt2looker on a limited subset of dbt models via a tag; the same error appears. Any help is greatly appreciated!

    [screenshot of the model file]

    Other details:

    • on dbt version 1.0.0
    • using dbt-redshift adapter [email protected]
    • let me know if anything else is of importance!
    opened by lewisosborne 8
  • Support model level measures

    Motivation

    We can technically implement a measure spanning multiple columns under a single column's meta, but it would be more natural to define such measures at the model level.

    models:
      - name: ubie_jp_lake__dm_medico__hourly_score_for_nps
        description: |
          {{ doc("ubie_jp_lake__dm_medico__hourly_score_for_nps") }}
        meta:
          measures:
            total_x_y_z:
              type: number
              description: 'Summation of total x, total y and total z'
              sql: '${total_x} + ${total_y} + ${total_z}'
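
    Under that proposal, the model-level meta would translate to a view-level measure roughly like this (a sketch):

```lookml
measure: total_x_y_z {
  type: number
  description: "Summation of total x, total y and total z"
  sql: ${total_x} + ${total_y} + ${total_z} ;;
}
```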
    
    
    opened by yu-iskw 0
  • Lookml files should merge with existing views

    If I already have a view file, I'd like to merge in any new columns I've added in dbt.

    For example, if I have a description in dbt but not in looker, I'd like to add it

    If looker already has a description, it should be left alone

    Thread in dbt slack: https://getdbt.slack.com/archives/C01DPMVM2LU/p1650353949839609?thread_ts=1649968691.671229&cid=C01DPMVM2LU
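
    A merge policy like the one described (keep Looker's existing description when present, otherwise fill in from dbt) could be sketched as:

```python
def merge_dimension(existing: dict, generated: dict) -> dict:
    """Merge a dbt-generated dimension into an existing Looker dimension.

    Fields already set in the existing view are left alone; missing or
    empty fields are filled from the generated dimension.
    """
    merged = dict(generated)
    merged.update({k: v for k, v in existing.items() if v not in (None, "")})
    return merged

existing = {"name": "url", "description": "Curated description"}
generated = {"name": "url", "type": "string", "description": "Page url"}
print(merge_dimension(existing, generated))
```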

    opened by owlas 0
  • Non-empty models cannot be parsed and are reported as empty

    As of version 0.9.2, dbt2looker no longer runs for us; v0.7.0 does run successfully. The error returned by 0.9.2 is 'Cannot parse model with id: "%s" - is the model file empty?'. However, the model this is returned for is not empty. Based on the code, it seems the 'name' attribute is missing, but inspecting the manifest.json file shows that this model does have a name. I have no idea why these models are reported as empty. The manifest.json object for one of the offending models is pasted below.

    Reverting to v0.9.0 (which does not yet have this error message) just leads to dbt2looker crashing without any information. Reverting to 0.7.0 fixes the problem. This issue effectively locks us (and likely others) into using an old version of dbt2looker.

    "model.zivver_dwh.crm_account_became_customer_dates":
            {
                "raw_sql": "WITH sfdc_accounts AS (\r\n\r\n    SELECT * FROM {{ ref('stg_sfdc_accounts') }}\r\n\r\n), crm_opportunities AS (\r\n\r\n    SELECT * FROM {{ ref('crm_opportunities') }}\r\n\r\n), crm_account_lifecycle_stage_changes_into_customer_observed AS (\r\n\r\n    SELECT\r\n        *\r\n    FROM {{ ref('crm_account_lifecycle_stage_changes_observed') }}\r\n    WHERE\r\n        new_stage = 'CUSTOMER'\r\n\r\n), became_customer_dates_from_opportunities AS (\r\n\r\n    SELECT\r\n        crm_account_id AS sfdc_account_id,\r\n\r\n        -- An account might have multiple opportunities. The account became customer when the first one was closed won.\r\n        MIN(closed_at) AS became_customer_at\r\n    FROM crm_opportunities\r\n    WHERE\r\n        opportunity_stage = 'CLOSED_WON'\r\n    GROUP BY\r\n        1\r\n\r\n), became_customer_dates_observed AS (\r\n\r\n    -- Some accounts might not have closed won opportunities, but still be a customer. Examples would be Connect4Care\r\n    -- customers, which have a single opportunity which applies to multiple accounts. If an account is manually set\r\n    -- to customer, this should also count as a customer.\r\n    --\r\n    -- We try to get the date at which they became a customer from the property history. Since that wasn't on from\r\n    -- the beginning, we conservatively default to either the creation date of the account or the history tracking\r\n    -- start date, whichever was earlier. 
Please note that this case should be exceedingly rare.\r\n    SELECT\r\n        sfdc_accounts.sfdc_account_id,\r\n        CASE\r\n            WHEN {{ var('date:sfdc:account_history_tracking:start_date') }} <= sfdc_accounts.created_at\r\n                THEN sfdc_accounts.created_at\r\n            ELSE {{ var('date:sfdc:account_history_tracking:start_date') }}\r\n        END AS default_became_customer_date,\r\n\r\n        COALESCE(\r\n            MIN(crm_account_lifecycle_stage_changes_into_customer_observed.new_stage_entered_at),\r\n            default_became_customer_date\r\n        ) AS became_customer_at\r\n\r\n    FROM sfdc_accounts\r\n    LEFT JOIN crm_account_lifecycle_stage_changes_into_customer_observed\r\n        ON sfdc_accounts.sfdc_account_id = crm_account_lifecycle_stage_changes_into_customer_observed.sfdc_account_id\r\n    WHERE\r\n        sfdc_accounts.lifecycle_stage = 'CUSTOMER'\r\n    GROUP BY\r\n        1,\r\n        2\r\n\r\n)\r\nSELECT\r\n    COALESCE(became_customer_dates_from_opportunities.sfdc_account_id,\r\n        became_customer_dates_observed.sfdc_account_id) AS sfdc_account_id,\r\n    COALESCE(became_customer_dates_from_opportunities.became_customer_at,\r\n        became_customer_dates_observed.became_customer_at) AS became_customer_at\r\nFROM became_customer_dates_from_opportunities\r\nFULL OUTER JOIN became_customer_dates_observed\r\n    ON became_customer_dates_from_opportunities.sfdc_account_id = became_customer_dates_observed.sfdc_account_id",
                "resource_type": "model",
                "depends_on":
                {
                    "macros":
                    [
                        "macro.zivver_dwh.ref",
                        "macro.zivver_dwh.audit_model_deployment_started",
                        "macro.zivver_dwh.audit_model_deployment_completed",
                        "macro.zivver_dwh.grant_read_rights_to_role"
                    ],
                    "nodes":
                    [
                        "model.zivver_dwh.stg_sfdc_accounts",
                        "model.zivver_dwh.crm_opportunities",
                        "model.zivver_dwh.crm_account_lifecycle_stage_changes_observed"
                    ]
                },
                "config":
                {
                    "enabled": true,
                    "materialized": "ephemeral",
                    "persist_docs":
                    {},
                    "vars":
                    {},
                    "quoting":
                    {},
                    "column_types":
                    {},
                    "alias": null,
                    "schema": "bl",
                    "database": null,
                    "tags":
                    [
                        "business_layer",
                        "commercial"
                    ],
                    "full_refresh": null,
                    "crm_record_types": null,
                    "post-hook":
                    [
                        {
                            "sql": "{{ audit_model_deployment_completed() }}",
                            "transaction": true,
                            "index": null
                        },
                        {
                            "sql": "{{ grant_read_rights_to_role('data_engineer', ['all']) }}",
                            "transaction": true,
                            "index": null
                        },
                        {
                            "sql": "{{ grant_read_rights_to_role('analyst', ['all']) }}",
                            "transaction": true,
                            "index": null
                        }
                    ],
                    "pre-hook":
                    [
                        {
                            "sql": "{{ audit_model_deployment_started() }}",
                            "transaction": true,
                            "index": null
                        }
                    ]
                },
                "database": "analytics",
                "schema": "bl",
                "fqn":
                [
                    "zivver_dwh",
                    "business_layer",
                    "commercial",
                    "crm_account_lifecycle_stage_changes",
                    "intermediates",
                    "crm_account_became_customer_dates",
                    "crm_account_became_customer_dates"
                ],
                "unique_id": "model.zivver_dwh.crm_account_became_customer_dates",
                "package_name": "zivver_dwh",
                "root_path": "C:\\Users\\tjebbe.bodewes\\Documents\\zivver-dwh\\dwh\\transformations",
                "path": "business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.sql",
                "original_file_path": "models\\business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.sql",
                "name": "crm_account_became_customer_dates",
                "alias": "crm_account_became_customer_dates",
                "checksum":
                {
                    "name": "sha256",
                    "checksum": "a037b5681219d90f8bf8d81641d3587f899501358664b8ec77168901b3e1808b"
                },
                "tags":
                [
                    "business_layer",
                    "commercial"
                ],
                "refs":
                [
                    [
                        "stg_sfdc_accounts"
                    ],
                    [
                        "crm_opportunities"
                    ],
                    [
                        "crm_account_lifecycle_stage_changes_observed"
                    ]
                ],
                "sources":
                [],
                "description": "",
                "columns":
                {
                    "sfdc_account_id":
                    {
                        "name": "sfdc_account_id",
                        "description": "",
                        "meta":
                        {},
                        "data_type": null,
                        "quote": null,
                        "tags":
                        []
                    },
                    "became_customer_at":
                    {
                        "name": "became_customer_at",
                        "description": "",
                        "meta":
                        {},
                        "data_type": null,
                        "quote": null,
                        "tags":
                        []
                    }
                },
                "meta":
                {},
                "docs":
                {
                    "show": true
                },
                "patch_path": "zivver_dwh://models\\business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.yml",
                "compiled_path": null,
                "build_path": null,
                "deferred": false,
                "unrendered_config":
                {
                    "pre-hook":
                    [
                        "{{ audit_model_deployment_started() }}"
                    ],
                    "post-hook":
                    [
                        "{{ grant_read_rights_to_role('analyst', ['all']) }}"
                    ],
                    "tags":
                    [
                        "commercial"
                    ],
                    "materialized": "ephemeral",
                    "schema": "bl",
                    "crm_record_types": null
                },
                "created_at": 1637233875
            }
    
    opened by Tbodewes 2
Releases (v0.11.0)
  • v0.11.0(Dec 1, 2022)

    Added

    • support label and hidden fields (#49)
    • support non-aggregate measures (#41)
    • support bytes and bignumeric for bigquery (#75)
    • support for custom connection name on the cli (#78)

    Changed

    • updated dependencies (#74)

    Fixed

    • Types maps for redshift (#76)

    Removed

    • Strict manifest validation (#77)
  • v0.9.2(Oct 11, 2021)

  • v0.9.1(Oct 7, 2021)

    Fixed

    • Fixed bug where dbt2looker would crash if a dbt project contained an empty model

    Changed

    • When filtering models by tag, models that have no tag property will be ignored
  • v0.9.0(Oct 7, 2021)

    Added

    • Support for spark adapter (@chaimt)

    Changed

    • Updated with support for dbt2looker (@chaimt)
    • Lookml views now populate their "sql_table_name" using the dbt relation name
  • v0.8.2(Sep 22, 2021)

    Changed

    • Measures with missing descriptions fall back to column descriptions. If there is no column description, it falls back to "{measure_type} of {column_name}".
  • v0.8.1(Sep 22, 2021)

    Added

    • Dimensions have an enabled flag that can be used to switch off generated dimensions for certain columns with enabled: false
    • Measures can be defined under any of the following aliases: measures, measure, metrics, metric
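
    For illustration, these options might appear in schema.yml like so (a sketch; the exact meta paths are assumptions, not confirmed against the source):

```yaml
columns:
  - name: internal_id
    meta:
      dimension:
        enabled: false   # switch off the generated dimension for this column
      metrics:           # alias for `measures`
        distinct_ids:
          type: count_distinct
```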

    Changed

    • Updated dependencies
  • v0.8.0(Sep 9, 2021)

    Changed

    • Command line interface changed argument from --target to --target-dir

    Added

    • Added the --project-dir flag to the command line interface to change the search directory for dbt_project.yml
  • v0.7.3(Sep 9, 2021)

  • v0.7.2(Sep 9, 2021)

  • v0.7.1(Aug 27, 2021)

    Added

    • Use dbt2looker --output-dir /path/to/dir to customise the output directory of the generated lookml files

    Fixed

    • Fixed error with reporting json validation errors
    • Fixed error in join syntax in example .yml file
    • Fixed development environment for python3.7 users
  • v0.7.0(Apr 18, 2021)

  • v0.6.2(Apr 18, 2021)

  • v0.6.1(Apr 17, 2021)

  • v0.6.0(Apr 17, 2021)
