Notebook and code to synthesize complex, high-dimensional datasets using Gretel APIs.

Overview

Gretel Trainer

This code is designed to help users successfully train synthetic models on complex datasets with high row and column counts. The code works by intelligently dividing a dataset into a set of smaller datasets of correlated columns that can be trained in parallel and then joined back together.

Get Started

Running the notebook

  1. Launch the Notebook in Google Colab or your preferred environment.
  2. Add your dataset and Gretel API key to the notebook.
  3. Generate synthetic data!

NOTE: If you are starting a dataset run from scratch, either delete the existing cache file or choose a new cache file name.
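
If you prefer to script the same workflow outside the notebook, here is a minimal sketch using the gretel-trainer Python package (the dataset path and record count are placeholders; it assumes your Gretel API key is already configured):

    from gretel_trainer import trainer

    # Train a synthetic model on the dataset (placeholder file name).
    model = trainer.Trainer()
    model.train("my_dataset.csv")

    # Sample synthetic records from the trained model.
    synthetic_df = model.generate(num_records=5000)
    print(synthetic_df.head())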

TODOs / Roadmap

  • Enable additional sampling from trained models.
  • Detect and label encode random UIDs (preprocessing).

Comments

  • Benchmark route Amplify models through Trainer

    Top level change

    Now that Trainer has a GretelAmplify model, Benchmark uses Trainer for Amplify runs instead of the SDK.

    Refactor

    I refactored Benchmark's Gretel models and executors with the goal of centralizing this logic and thus making it simpler to understand:

    • which model types use Trainer (opt-in) vs. use the SDK
    • the "compatibility requirements" for different models (currently: LSTM <= 150 columns, GPTX == 1 column)

    These had been spread across a few different places (compare.py determined Trainer/SDK, gretel/sdk.py had GPTX compatibility, gretel/trainer.py had LSTM compatibility), but now it can all be found in gretel/models.py.
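
    As an illustration only (a hedged sketch, not the actual contents of gretel/models.py), centralized rules along these lines might look like:

    # Hypothetical sketch of centralized compatibility rules; names and
    # structure are illustrative, not the real gretel/models.py.
    def is_compatible(model_name: str, column_count: int) -> bool:
        if model_name == "GretelLSTM":
            return column_count <= 150  # LSTM handles at most 150 columns
        if model_name == "GretelGPTX":
            return column_count == 1    # GPTX expects a single text column
        return True                     # other model types: no column restriction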

    At first glance it would seem compatibility requirements could be defined on specific model subclasses to make things more polymorphic. However, Benchmark's Gretel model classes are really just friendly wrappers around specific model configurations (from the blueprints repo) and do not represent all possible instances of that model type running through Benchmark. Instead, we instruct users to subclass the generic GretelModel base class when they want to provide their own specific Gretel configuration. There are two reasons for this:

    1. It's a simpler instruction (always subclass this one thing)
    2. It enables us to include model types that are not yet "first class supported," such as DGAN (which we can't support in the same way we do models like Amplify/LSTM/etc. because DGAN's config includes required fields that are specifically coupled to the data source—there is no "one size fits all" blueprint).

    Small fixes

    • fix the model_slug value for Trainer's GretelACTGAN model
      • :warning: should this be changed to a list ["actgan", "ctgan"] for a little while for a smoother transition/deprecation experience??
    • zero-index custom model runs' run-identifier to match gretel model runs (which were themselves fixed to match project names here)
    opened by mikeknep 2
  • Lift gretel model compatibility to separate module

    What's here

    Make it easier to find the "compatibility rules" for models by lifting the logic to its own module.

    Why not add this logic to the specific model classes? Wouldn't that be more polymorphic?

    The model classes (GretelLSTM, GretelCTGAN, etc.) are wrappers around specific configurations from the blueprints repo. They do not represent every possible configuration of that model type. If a user wants to run a customized LSTM config, for example, they subclass GretelModel, not GretelLSTM:

    class MyLstm(GretelModel):
        config = "/path/to/my_lstm.yml"
    

    Note: they could subclass GretelLSTM, but 1) it's easier to tell people to just subclass GretelModel regardless of model type, and 2) this ultimately treats the model configuration as the source of truth.

    If someone mistakenly created a custom Gretel model like this...

    class MyGptX(GretelGPTX):
        config = "/path/to/my_amplify.yml"
    

    ...Benchmark will treat this as an Amplify model, because basically all it does with the class instance is grab the config attribute (and the name—the results output will show the name as MyGptX.)
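
    In other words (an illustrative snippet, not Benchmark internals), the instance is treated purely as a configuration carrier:

    # Illustrative only: Benchmark reads the config path and the class name
    # off the instance; the parent class does not determine the model type.
    model = MyGptX()
    config_path = model.config           # points at an Amplify config here
    display_name = type(model).__name__  # shown in the results as "MyGptX"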

    opened by mikeknep 1
  • Lr/artifact manifest

    Added logic for config selection and updated dictionary key to access manifest per latest internal changes.

    Note that high-dimensionality-high-record is non-existent at the moment, as is the manifest endpoint :)

    Items yet to be addressed:

    • turn off partitions for non-LSTM models
    opened by lipikaramaswamy 1
  • Add param to pass custom base configuration

    • Prefer config if present, otherwise use the model_type's default config (see the sketch after this list).
    • This does open the door a little wider to setting an invalid config that won't be known to be bad until attempting to train. That door was already slightly ajar in that one could use model_params to set keys to invalid values.
    • Not included here, but a thought: we could validate model_type earlier (even as the very first step of __init__) to fail fast, specifically before even creating a project.
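
    A minimal sketch of that precedence (a hypothetical helper, not the actual Trainer internals):

    def resolve_config(config=None, model_type=None):
        if config is not None:
            return config             # explicit base configuration wins
        return model_type.config      # otherwise use the model type's default blueprint
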
    opened by mikeknep 1
  • Remove no-op elif case from runner

    Particularly given that we now have a third model (Amplify) supported in Trainer, we can remove this no-op elif clause so that the runner only has special logic for / awareness of LSTM (expand up in the diff for context).

    opened by mikeknep 0
  • Switch CTGAN usages to ACTGAN.

    ACTGAN is the successor of CTGAN.

    Note (1): this change is backward compatible, as all of the parameters that CTGAN supported are supported by ACTGAN as well.

    Note (2): any previously trained CTGAN models will still be usable, i.e. it will be possible to generate new records using old CTGAN models.

    opened by pimlock 0
  • Fix off-by-one difference between project name and run ID

    Quick fix so that benchmark's internal run identifier lines up with the project name in Gretel Cloud. We'll eventually have a more user-friendly and stable interface for accessing detailed run information, but until we figure out exactly how we want that to look, this should make things a little friendlier for those willing to dive into the internals: the models from project benchmark-{timestamp}-3 will correspond to comparison.results_dict["gretel-3"] (instead of "gretel-4")

    Note: I considered just using the full project name as the identifier instead of gretel-{index}, but we don't have an equivalent to project names for user custom model runs, so I figure the current [gretel|custom]-{index} approach is still best for now.

    opened by mikeknep 0
  • Configure session before starting Benchmark comparison

    Current behavior

    When running in an environment where no Gretel credentials can be found (e.g. Colab), when Benchmark kicks off a comparison the background threads instantiating Trainer instances will prompt for an API key. This is problematic for multiple reasons, all (I believe) due to it running in multiple background threads: it prompts multiple times, doesn't accept input and/or cache properly, and ultimately crashes.

    This fix

    Benchmark itself now checks for a configured session before kicking off any real work. It prompts (api_key="prompt") if no credentials are found, validates (validate=True) the supplied API key, and caches (cache="yes") it for all the runs it manages. The configure_session calls that happen when instantiating Trainer effectively "pass through." I've tested this by installing trainer from this branch in Colab and it is now working as expected.
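
    The check amounts to something like the following (argument values as described above; exact placement within Benchmark may differ):

    from gretel_client import configure_session

    # Prompt for an API key only if no credentials are found, validate it,
    # and cache it so the background Trainer threads don't prompt again.
    configure_session(api_key="prompt", validate=True, cache="yes")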

    opened by mikeknep 0
  • Include dataset name in trainer uploads.

    Add original file name to data sources uploaded as part of trainer projects. This helps disambiguate the data sources from multiple trainer runs where previously they were always named trainer_0.csv, trainer_1.csv, etc.

    Also fixes StrategyRunner to not silently swallow all ApiExceptions when submitting a job, so errors not associated with max job limit are still thrown and surfaced to the user.
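
    A rough sketch of that narrower error handling (the ApiException import path and the job-limit check here are assumptions, not the exact StrategyRunner code):

    from gretel_client.rest.exceptions import ApiException  # assumed import path

    def submit_or_defer(submit_job):
        """Submit a job, swallowing only the max-job-limit ApiException (assumed check)."""
        try:
            submit_job()
        except ApiException as err:
            if "max" not in str(err).lower():
                raise  # surface errors unrelated to the max job limit
            # job limit reached: leave the job queued for a later attempt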

    opened by kboyd 0
  • Auto-determine best model from training data

    Rather than create a GretelAuto model class that would need to override or work around several _BaseConfig details (validation, max/limit values, etc.), my goal here is to establish the convention that model type is optional: if you don't specify one when instantiating the Trainer, you're OK with us choosing for you. This is a change from the current behavior (optional, but defaulting to LSTM). In this case, we defer setting the trainer instance's self.model_type until we can determine the best model to use: namely, at train time, once a dataset has been provided.

    I'm a little unclear on the load (from cache) workflow, which in this branch's implementation would set the StrategyRunner's model_config to None. I think this is OK because the only methods referencing that value are part of training (train_all_partitions => train_next_partition => train_partition), and that workflow is only kicked off by the Trainer's train method, which will load in data and use it to determine and set a concrete model.

    I've also added an optional delimiter parameter to train to help support files with non-comma delimiters.
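
    Hypothetical usage reflecting both behaviors (the file name is a placeholder):

    from gretel_trainer import trainer

    # No model_type given: the Trainer selects the best model at train time.
    model = trainer.Trainer()
    model.train("my_data.tsv", delimiter="\t")  # optional delimiter for non-comma files
    synthetic_df = model.generate()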

    opened by mikeknep 0
  • Get average sqs score from across partitions

    There are a few ways we could slice and dice this; I figure there may be additional SQS info we want from the run in the future, so I decided to expose the entire List[dict] from the runner and let the trainer pluck out the scores and calculate this first user-friendly aggregate. I'm open to pushing more of this down to the runner and/or transforming the SQS dictionaries into first-class types (likely dataclasses) if anyone has a strong opinion or thinks it'd be useful.
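
    For illustration (the key name here is hypothetical), the aggregation is essentially:

    # Hypothetical shape of the per-partition SQS dicts exposed by the runner;
    # the trainer averages them into a single user-friendly score.
    def average_sqs(sqs_reports):
        scores = [report["synthetic_data_quality_score"] for report in sqs_reports]
        return sum(scores) / len(scores)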

    opened by mikeknep 0
  • Use artifact manifest for determine_best_model.

    Not fully tested. Waiting for new backend API to be available.

    Should revisit retry logic if we can reliably distinguish between a pending manifest (still being generated) and some other error. Or if retrying is included in the gretel_client interface.
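
    A hypothetical retry sketch along those lines (names and timing are assumptions):

    import time

    def fetch_manifest(get_manifest, attempts=5, delay_seconds=10):
        # get_manifest stands in for the (not yet available) manifest endpoint;
        # None is treated as "still being generated".
        for _ in range(attempts):
            manifest = get_manifest()
            if manifest is not None:
                return manifest
            time.sleep(delay_seconds)
        raise RuntimeError("artifact manifest not available after retries")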

    opened by kboyd 1
Releases (v0.5.0)
  • v0.5.0 (Nov 18, 2022)

    What's Changed

    • GretelCTGAN has been completely removed, fully replaced by its successor, GretelACTGAN
    • GretelACTGAN uses the new tabular-actgan config by default
    • Benchmark now routes Amplify models through Trainer rather than the SDK
    • Bug fix: helper to properly configure Gretel session before starting Benchmark comparison when unset
    • Bug fix: zero-index Benchmark run ID (internal) to fix off-by-one difference with project name

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.4.1...v0.5.0

  • v0.4.1 (Nov 2, 2022)

    What's Changed

    • Add pip install command and Colab disclaimer to Benchmark notebook by @mikeknep in https://github.com/gretelai/trainer/pull/22
    • Include dataset name in trainer uploads. by @kboyd in https://github.com/gretelai/trainer/pull/21
    • Docs improvements by @MasonEgger (https://github.com/gretelai/trainer/pull/23 https://github.com/gretelai/trainer/pull/24 https://github.com/gretelai/trainer/pull/28 https://github.com/gretelai/trainer/pull/26)
    • Add support for Gretel Amplify by @pimlock in https://github.com/gretelai/trainer/pull/29

    New Contributors

    • @kboyd made their first contribution in https://github.com/gretelai/trainer/pull/21
    • @MasonEgger made their first contribution in https://github.com/gretelai/trainer/pull/23
    • @pimlock made their first contribution in https://github.com/gretelai/trainer/pull/29

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.4.0...v0.4.1

  • v0.4.0 (Oct 6, 2022)

    What's Changed

    • Initial release of new Benchmark module :rocket: by @mikeknep in https://github.com/gretelai/trainer/pull/19
    • Create simple-conditional-generation.ipynb :notebook: by @zredlined in https://github.com/gretelai/trainer/pull/18

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.3.0...v0.4.0

  • v0.3.0 (Aug 30, 2022)

  • v0.2.3 (Aug 24, 2022)

    What's Changed

    • The trainer now chooses the best model configuration based on input training data when model_type is not specified in advance at Trainer instantiation (previously defaulted to GretelLSTM)
    • train accepts an optional delimiter argument (defaults to comma when unspecified)
    • Input training data is divided more equally across row partitions
    • LSTM models generate a consistent number of records (5000) during training (previously this matched the size of the input training data)
    • Fixed trainer generate to synthesize the correct number of records when multiple row partitions are used
    • Fixed trainer get_sqs_score method

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.2.2...v0.2.3

  • v0.2.2 (Aug 11, 2022)

    What's Changed

    • Update default model config by @zredlined in https://github.com/gretelai/trainer/pull/10
    • Remove project delete instruction by @drew in https://github.com/gretelai/trainer/pull/11
    • CTGAN and conditional data generation by @zredlined in https://github.com/gretelai/trainer/pull/12
    • Get average sqs score from across partitions by @mikeknep in https://github.com/gretelai/trainer/pull/14

    Full Changelog: https://github.com/gretelai/trainer/compare/v0.2.1...v0.2.2

  • v0.2.1 (Jun 16, 2022)

  • v0.2.0 (Jun 10, 2022)

  • v0.1.0 (Jun 10, 2022)

Owner

Gretel.ai (Gretel.ai Open Source Projects and Tools)