Overview

Tangram

Website | Discord

Tangram makes it easy for programmers to train, deploy, and monitor machine learning models.

  • Run tangram train to train a model from a CSV file on the command line.
  • Make predictions with libraries for Elixir, Go, JavaScript, PHP, Python, Ruby, and Rust.
  • Run tangram app to learn more about your models and monitor them in production.

Install

Install the tangram CLI

Train

Train a machine learning model by running tangram train with the path to a CSV file and the name of the column you want to predict.

$ tangram train --file heart_disease.csv --target diagnosis --output heart_disease.tangram
✅ Loading data.
✅ Computing features.
🚂 Training model 1 of 8.
[==========================================>                         ]

The CLI automatically transforms your data into features, trains a number of linear and gradient boosted decision tree models to predict the target column, and writes the best model to a .tangram file. If you want more control, you can provide a config file.
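
If you use a config file, the invocation might look like the sketch below. This is a hedged example: the --config flag name and the config.json file are assumptions rather than options documented on this page, so check tangram train --help for the exact flags.

$ tangram train \
    --file heart_disease.csv \
    --target diagnosis \
    --config config.json \
    --output heart_disease.tangram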

Predict

Make predictions with libraries for Elixir, Go, JavaScript, PHP, Python, Ruby, and Rust.

// Load the library and the model trained above.
let tangram = require("@tangramdotdev/tangram")

let model = new tangram.Model("./heart_disease.tangram")

// Provide an input with the same columns as the training CSV.
let input = {
	age: 63,
	gender: "male",
	// ...
}

// Make a prediction and print it.
let output = model.predict(input)
console.log(output)
// { className: 'Negative', probability: 0.9381780624389648 }
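
The predict call may also accept a second options argument; the sketch below is an assumption about that shape (neither the second argument nor the threshold field is documented on this page), but it illustrates the kind of object the monitoring example below passes as options.

// Hypothetical prediction options; the second argument and field name are assumptions.
let options = { threshold: 0.5 }
let output = model.predict(input, options)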

Inspect

Run tangram app, open your browser to http://localhost:8080, and upload the model you trained.

  • View stats and metrics.
  • Tune your model to get the best performance.
  • Make example predictions and get detailed explanations.
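
For example, to start the app locally:

$ tangram app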

[screenshots: report, tune]

Monitor

Once your model is deployed, make sure that it performs as well in production as it did in training. Opt in to logging by calling logPrediction.

// Log the prediction.
model.logPrediction({
	identifier: "6c955d4f-be61-4ca7-bba9-8fe32d03f801",
	input,
	options,
	output,
})

Later on, if you find out the true value for a prediction, call logTrueValue.

// Later on, if we get an official diagnosis for the patient, log the true value.
model.logTrueValue({
	identifier: "6c955d4f-be61-4ca7-bba9-8fe32d03f801",
	trueValue: "Positive",
})

Now you can:

  • Look up any prediction by its identifier and get a detailed explanation.
  • Get alerts if your data drifts or metrics dip.
  • Track production accuracy, precision, recall, etc.

[screenshots: predictions, drift, metrics]

Building from Source

This repository is a Cargo workspace, and nothing other than the latest stable Rust toolchain is required to get started.

  1. Install Rust on Linux, macOS, or Windows.
  2. Clone this repo and cd into it.
  3. Run cargo run to run a debug build of the CLI, as sketched below.
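
A hedged sketch of these steps (the repository URL is the one referenced elsewhere on this page, and passing the CLI subcommand after -- is an assumption about how the debug build is invoked):

$ git clone https://github.com/tangramdotdev/tangram
$ cd tangram
$ cargo run -- train --file heart_disease.csv --target diagnosis --output heart_disease.tangram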

If you are working on the app, run scripts/app/dev. This rebuilds and reruns the CLI with the app subcommand as you make changes.

To install all dependencies necessary to work on the language libraries and build releases, install Nix with flake support, then run nix develop or set up direnv.

If you want to submit a pull request, please run scripts/fmt and scripts/check at the root of the repository to confirm that your changes are formatted correctly and do not have any errors.
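
For example, from the repository root:

$ scripts/fmt
$ scripts/check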

License

All of this repository is MIT licensed, except for the crates/app directory, which is source available and free to use for testing, but requires a paid license to use in production. Send us an email at [email protected] if you are interested in a license.

Comments
  • Maximum train dataset size.

    Is there any limit on the maximum train dataset size? I fed tangram a .csv file with almost 7800 lines of valid data, but only 6640 rows (including the test set) are used by tangram for training.

    bug 
    opened by m-kru 14
  • Crash on classification with config file

    ✅ Inferring train table columns. 2s
    ✅ Loading train table. 2s
    ✅ Loading test table. 5s
    ✅ Shuffling. 0s 628ms
    ✅ Computing train stats. 9s
    ✅ Computing test stats. 27s
    ✅ Finalizing stats. 16s
    ๐Ÿ Computing baseline metrics. 212389 / 230150 92% 0s 15ms elapsed 0ms remaining
    [=======================================================================>      ]
    [Thread 0x7ffff7c7e640 (LWP 419555) exited]
    thread panicked while panicking. aborting.
    
    Thread 1 "tangram" received signal SIGILL, Illegal instruction.
    
    #0  std::panicking::rust_panic_with_hook () at library/std/src/panicking.rs:621
    #1  0x00005555572d34a0 in std::panicking::begin_panic_handler::{closure#0} () at library/std/src/panicking.rs:502
    #2  0x00005555572d1944 in std::sys_common::backtrace::__rust_end_short_backtrace<std::panicking::begin_panic_handler::{closure#0}, !> () at library/std/src/sys_common/backtrace.rs:139
    #3  0x00005555572d3409 in std::panicking::begin_panic_handler () at library/std/src/panicking.rs:498
    #4  0x0000555555893a51 in core::panicking::panic_fmt () at library/core/src/panicking.rs:107
    #5  0x0000555555893b43 in core::result::unwrap_failed () at library/core/src/result.rs:1613
    #6  0x00005555558d6764 in core::result::Result::unwrap<(), std::sync::mpsc::SendError<core::option::Option<tangram_core::progress::ProgressEvent>>> () at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/result.rs:1295
    #7  tangram::train::{impl#1}::drop () at crates/cli/train.rs:169
    #8  0x00005555559319a9 in core::ptr::drop_in_place<tangram::train::ProgressThread> () at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/ptr/mod.rs:188
    #9  core::ptr::drop_in_place<core::option::Option<tangram::train::ProgressThread>> () at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/ptr/mod.rs:188
    #10 0x000055555593a651 in tangram::train::train::{closure#1} () at crates/cli/train.rs:117
    #11 0x00005555558d5c27 in std::panicking::try::do_call<tangram::train::train::{closure#1}, core::result::Result<(tangram_core::model::Model, std::path::PathBuf), anyhow::Error>> () at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:406
    #12 std::panicking::try<core::result::Result<(tangram_core::model::Model, std::path::PathBuf), anyhow::Error>, tangram::train::train::{closure#1}> () at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:370
    #13 std::panic::catch_unwind<tangram::train::train::{closure#1}, core::result::Result<(tangram_core::model::Model, std::path::PathBuf), anyhow::Error>> () at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panic.rs:133
    #14 tangram::train::train () at crates/cli/train.rs:37
    #15 0x000055555590c001 in tangram::main () at crates/cli/main.rs:170
    

    I commented out the stuff in drop and got:

    ✅ Inferring train table columns. 0s 9ms
    ✅ Loading train table. 0s 11ms
    ✅ Loading test table. 0s 35ms
    ✅ Shuffling. 0s 3ms
    ✅ Computing train stats. 0s 24ms
    ✅ Computing test stats. 0s 77ms
    ✅ Finalizing stats. 0s 50ms
    ๐Ÿ Computing baseline metrics. 218421 / 230150 95% 0s 15ms elapsed 0ms remaining
    [==========================================================================>   ]
    error: panicked at 'called `Result::unwrap()` on an `Err` value: SendError { .. }', crates/cli/train.rs:163:14
       0: tangram::train::train::{{closure}}
                 at /home/grayshade/tangram/crates/cli/train.rs:34:40
       1: std::panicking::rust_panic_with_hook
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:610:17
       2: std::panicking::begin_panic_handler::{{closure}}
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:502:13
       3: std::sys_common::backtrace::__rust_end_short_backtrace
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:139:18
       4: rust_begin_unwind
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:498:5
       5: core::panicking::panic_fmt
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/panicking.rs:107:14
       6: core::result::unwrap_failed
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/result.rs:1613:5
       7: core::result::Result<T,E>::unwrap
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/result.rs:1295:23
          tangram::train::ProgressThread::send_progress_event
                 at /home/grayshade/tangram/crates/cli/train.rs:159:3
          tangram::train::train::{{closure}}::{{closure}}
                 at /home/grayshade/tangram/crates/cli/train.rs:96:5
       8: tangram_core::train::train_grid_item::{{closure}}
                 at /home/grayshade/tangram/crates/core/train.rs:1031:3
       9: tangram_core::train::train_linear_regressor::{{closure}}
                 at /home/grayshade/tangram/crates/core/train.rs:1284:3
      10: tangram_linear::multiclass_classifier::MulticlassClassifier::train
                 at /home/grayshade/tangram/crates/linear/multiclass_classifier.rs:132:3
      11: tangram_core::train::train_linear_multiclass_classifier
                 at /home/grayshade/tangram/crates/core/train.rs:1484:21
          tangram_core::train::train_model
                 at /home/grayshade/tangram/crates/core/train.rs:1233:8
          tangram_core::train::train_grid_item
                 at /home/grayshade/tangram/crates/core/train.rs:1030:27
          tangram_core::train::Trainer::train_grid::{{closure}}
                 at /home/grayshade/tangram/crates/core/train.rs:252:5
          core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &mut F>::call_once
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/ops/function.rs:280:13
      12: core::option::Option<T>::map
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/option.rs:846:29
          <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::next
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/iter/adapters/map.rs:103:9
          <alloc::vec::Vec<T> as alloc::vec::spec_from_iter_nested::SpecFromIterNested<T,I>>::from_iter
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/alloc/src/vec/spec_from_iter_nested.rs:23:32
          <alloc::vec::Vec<T> as alloc::vec::spec_from_iter::SpecFromIter<T,I>>::from_iter
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/alloc/src/vec/spec_from_iter.rs:33:9
      13: <alloc::vec::Vec<T> as core::iter::traits::collect::FromIterator<T>>::from_iter
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/alloc/src/vec/mod.rs:2549:9
          core::iter::traits::iterator::Iterator::collect
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/iter/traits/iterator.rs:1745:9
          tangram_core::train::Trainer::train_grid
                 at /home/grayshade/tangram/crates/core/train.rs:246:33
      14: tangram::train::train::{{closure}}
                 at /home/grayshade/tangram/crates/cli/train.rs:100:33
      15: std::panicking::try::do_call
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:406:40
          std::panicking::try
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:370:19
          std::panic::catch_unwind
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panic.rs:133:14
          tangram::train::train
                 at /home/grayshade/tangram/crates/cli/train.rs:37:15
      16: tangram::main
                 at /home/grayshade/tangram/crates/cli/main.rs:170:30
      17: core::ops::function::FnOnce::call_once
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/ops/function.rs:227:5
          std::sys_common::backtrace::__rust_begin_short_backtrace
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:123:18
      18: std::rt::lang_start::{{closure}}
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/rt.rs:145:18
      19: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/ops/function.rs:259:13
          std::panicking::try::do_call
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:406:40
          std::panicking::try
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:370:19
          std::panic::catch_unwind
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panic.rs:133:14
          std::rt::lang_start_internal::{{closure}}
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/rt.rs:128:48
          std::panicking::try::do_call
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:406:40
          std::panicking::try
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:370:19
          std::panic::catch_unwind
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panic.rs:133:14
          std::rt::lang_start_internal
                 at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/rt.rs:128:20
      20: main
      21: __libc_start_call_main
      22: __libc_start_main@GLIBC_2.2.5
      23: _start
    
    bug 
    opened by lnicola 11
  • Support piping training data through stdin

    $ cat heart_disease.csv | tangram train --target diagnosis
    error: panicked at 'internal error: entered unreachable code', crates/core/train.rs:85:18
       0: backtrace::capture::Backtrace::new
       1: tangram::train::train::{{closure}}
       2: std::panicking::rust_panic_with_hook
       3: std::panicking::begin_panic_handler::{{closure}}
       4: std::sys_common::backtrace::__rust_end_short_backtrace
       5: _rust_begin_unwind
       6: core::panicking::panic_fmt
       7: core::panicking::panic
       8: tangram_core::train::Trainer::prepare
       9: tangram::main
      10: std::sys_common::backtrace::__rust_begin_short_backtrace
      11: _main
    

    I'm not sure this is actually supported, but since stdin works for "tangram predict", it would be less surprising to support stdin for train as well.

    It would also allow integrating training into other software without having to write a temporary CSV file to disk, and would make it possible to pass compressed CSV directly (gunzip heart_disease.csv.gz | tangram train --target diagnosis) or to transform the CSV on the fly (e.g. https://github.com/tangramdotdev/tangram/issues/35#issuecomment-913948443).

    enhancement good first issue 
    opened by comunidadio 10
  • Issue running cli from Docker

    When I tried to run the tangram CLI through Rust's Command from a Docker container, I got the following error:

    error: No such device or address (os error 6)
    

    Sample project: https://github.com/joelchen/train

    opened by joelchen 6
  • Optimizing the size of the model

    Which hyperparameters are the most important ones for minimizing the size of a Gradient Boosted Tree model? From my experiments so far, it seems like min_examples_per_node and max_rounds have the biggest effect.

    opened by vks 6
  • Thread 'main' panicked at 'called `Option::unwrap()` on a `None` value'

    Happened in one of my applications:

    thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /Users/joelchen/.cargo/git/checkouts/tangram-cb663c32440b0d24/443190a/crates/core/predict.rs:977:71
    stack backtrace:
       0:        0x1052c780c - std::backtrace_rs::backtrace::libunwind::trace::h449592924b3bd63f
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5
       1:        0x1052c780c - std::backtrace_rs::backtrace::trace_unsynchronized::ha2aaeafed0c31c90
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
       2:        0x1052c780c - std::sys_common::backtrace::_print_fmt::h58db85a17304976f
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/sys_common/backtrace.rs:66:5
       3:        0x1052c780c - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h10cf06316d33e2a9
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/sys_common/backtrace.rs:45:22
       4:        0x1052e5454 - core::fmt::write::h1faf18c959c3a8df
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/fmt/mod.rs:1190:17
       5:        0x1052c1454 - std::io::Write::write_fmt::h86ab231360bc97d2
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/io/mod.rs:1657:15
       6:        0x1052c9f58 - std::sys_common::backtrace::_print::h771b4aab9b128422
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/sys_common/backtrace.rs:48:5
       7:        0x1052c9f58 - std::sys_common::backtrace::print::h637de99a9f76e8a7
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/sys_common/backtrace.rs:35:9
       8:        0x1052c9f58 - std::panicking::default_hook::{{closure}}::h36e628ffaf3cd44f
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:295:22
       9:        0x1052c9bd0 - std::panicking::default_hook::h3ee1564a7544e58f
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:314:9
      10:        0x1052ca5ec - std::panicking::rust_panic_with_hook::h191339fbd2fe2360
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:702:17
      11:        0x1052ca308 - std::panicking::begin_panic_handler::{{closure}}::h91c230befd9929e3
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:586:13
      12:        0x1052c7cf4 - std::sys_common::backtrace::__rust_end_short_backtrace::haaaeebb1d37476b3
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/sys_common/backtrace.rs:138:18
      13:        0x1052ca07c - rust_begin_unwind
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:584:5
      14:        0x1053064b0 - core::panicking::panic_fmt::h4fe1013b011ef602
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/panicking.rs:143:14
      15:        0x1053063cc - core::panicking::panic::he60bb304466ccbaf
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/panicking.rs:48:5
      16:        0x104ff6fa0 - core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &mut F>::call_once::h5d7b5ec3ed46c84a
      17:        0x104f6cb84 - <alloc::vec::Vec<T> as alloc::vec::spec_from_iter::SpecFromIter<T,I>>::from_iter::hba64e4f67e876890
      18:        0x104ff0c3c - modelfox_core::predict::predict::h85cadbfd39ca5d40
      19:        0x104e11900 - modelfox::Model<Input,Output>::predict_one::hc081ee438df594f9
      20:        0x104ef9558 - application::Application::predict::{{closure}}::hc8ddec271ead1da4
      21:        0x104f1ca60 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::hfdacf722533753d0
      22:        0x104f1a3bc - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::h670e4eceed9f117c
      23:        0x104f33f40 - tokio::park::thread::CachedParkThread::block_on::h980578f8d05e977f
      24:        0x104f38228 - tokio::runtime::thread_pool::ThreadPool::block_on::h32a7c6b75fb06898
      25:        0x104e107bc - application::main::hc98c4c7340fe37ac
      26:        0x104ec9058 - std::sys_common::backtrace::__rust_begin_short_backtrace::h372f6a20aa0b8a8f
      27:        0x104f4b264 - std::rt::lang_start::{{closure}}::hf6364dd20d0f1090
      28:        0x1052c70b8 - core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once::h8eb3ac20f80eabfa
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/ops/function.rs:259:13
      29:        0x1052c70b8 - std::panicking::try::do_call::ha6ddf2c638427188
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:492:40
      30:        0x1052c70b8 - std::panicking::try::hda8741de507c1ad0
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:456:19
      31:        0x1052c70b8 - std::panic::catch_unwind::h82424a01f258bd39
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panic.rs:137:14
      32:        0x1052c70b8 - std::rt::lang_start_internal::{{closure}}::h67e296ed5b030b7b
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/rt.rs:128:48
      33:        0x1052c70b8 - std::panicking::try::do_call::hd3dd7e7e10f6424e
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:492:40
      34:        0x1052c70b8 - std::panicking::try::ha0a7bd8122e3fb7c
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:456:19
      35:        0x1052c70b8 - std::panic::catch_unwind::h809b0e1092e9475d
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panic.rs:137:14
      36:        0x1052c70b8 - std::rt::lang_start_internal::h358b6d58e23c88c7
                                   at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/rt.rs:128:20
      37:        0x104e10aec - _main
    
    opened by joelchen 5
  • Unable to build aarch64-unknown-linux-gnu target

    cargo build --release with the latest master branch of tangram on AArch64 architecture will fail with the error:

    error: could not compile `tangram_dev`
    
    Caused by:
      process didn't exit successfully: `rustc --crate-name tangram_dev --edition=2021 crates/dev/main.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C lto -C metadata=5127925cb15b9122 -C extra-filename=-5127925cb15b9122 --out-dir /home/ubuntu/tangram/target/release/deps -L dependency=/home/ubuntu/tangram/target/release/deps --extern clap=/home/ubuntu/tangram/target/release/deps/libclap-23bfe18d31f92ad3.rlib --extern sunfish=/home/ubuntu/tangram/target/release/deps/libsunfish-d04b16249dcff1ef.rlib --extern tokio=/home/ubuntu/tangram/target/release/deps/libtokio-df6922999aa345c4.rlib` (signal: 9, SIGKILL: kill)
    warning: build failed, waiting for other jobs to finish...
    error: build failed
    

    Rust target and toolchain for AArch64 are installed:

    $ rustup show
    Default host: aarch64-unknown-linux-gnu
    rustup home:  /home/ubuntu/.rustup
    
    installed targets for active toolchain
    --------------------------------------
    
    aarch64-unknown-linux-gnu
    wasm32-unknown-unknown
    
    active toolchain
    ----------------
    
    stable-aarch64-unknown-linux-gnu (default)
    rustc 1.57.0 (f1edd0429 2021-11-29)
    

    Machine is ARM64 based Ubuntu:

    $ uname -a
    Linux ip-10-0-0-13 5.13.0-1005-aws #6-Ubuntu SMP Fri Oct 8 07:40:12 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
    
    bug 
    opened by joelchen 5
  • What kind of models and training methods are used by tangram?

    I am trying to find out what kinds of models and training methods are used by tangram, but I am struggling to find any information. The help message from tangram train --help says nothing about it. I can only see that tangram trains 8 models. On the About page I can read "... of the gradient boosted decision tree algorithm.", but this information is also very rudimentary.

    documentation 
    opened by m-kru 5
  • App identifier doesn't match identifier in log_prediction

    Good evening!

    I recently tried to push model predictions into an application and discovered that the identifiers I specified in the log_prediction function did not match the identifiers that appeared in the application. I am using Python 3.9 and tangram==0.7.0.

    Here are screenshots:

    Python code

    ะกะฝะธะผะพะบ ัะบั€ะฐะฝะฐ 2022-01-11 ะฒ 23 28 51

    Event in the app. Here it says that the prediction identifier is the same as in the Python code, but the actual identifier for this prediction is 3237cbe3097c87f5e73b7ce796f38be7, as stated in its link. And if I want to use log_true_value for this prediction, I have to use 3237cbe3097c87f5e73b7ce796f38be7 instead of the original id.

    ะกะฝะธะผะพะบ ัะบั€ะฐะฝะฐ 2022-01-11 ะฒ 23 28 58 bug 
    opened by komatded 4
  • Allow fixing a single parameter value

    I have two potential use cases where I'd like to fix a single parameter value, but otherwise get the full default parameter grid. If I understand correctly, currently I'd have to specify that full grid in quite verbose JSON, which is a bit much.

    1. For the data I have (lots of dummy variables), both linear and GBDT models do well and, perhaps depending on the downsampling stochastics, sometimes I get a linear model as best, sometimes GBDT. I'd like to fix that so that only linear models are tried (or only GBDT), because I don't want a discontinuity in production of having now a linear model, a month later GBDT, then linear again, etc.
    2. If I oversample (which I'm not currently doing), I believe I need to fix the min_examples_per_node to more than the upsampling replication count. I'd like to set that but otherwise get the default grid.

    If you want to keep the CLI simple, having these via Python (#12) would be fine for me too.

    enhancement design 
    opened by otsaw 4
  • Strange behavior when importing JS library

    After installing via npm install @tangramdotdev/tangram, importing with let tangram = require("@tangramdotdev/tangram"), and following the remainder of the tutorial steps, I can run the script and get a working prediction as expected. However, I am noticing some strange behaviors.

    A) ESLint is not happy with the import. It always gives me "Unable to resolve path to module '@tangramdotdev/tangram'." (eslint import/no-unresolved).

    B) Even though the script executes and works as expected, I cannot test any function that imports the tangram library. Jest always throws "Cannot find module '@tangramdotdev/tangram' from 'index.js'".

    I've never observed this before in any other library. Any guidance? I've included some screenshots:

    Working script (note the ESLint error on the import): [screenshot]. Ignore the blocks at lines 13 and 161; they are just some data marshalling/cleaning to get it in the right shape.

    Working result (after uncommenting line 177 in the above image to run it directly): [screenshot]

    A simple Jest test of the above foo function:

    const foo = require("./index")
    
    test("foo", () => {
        expect(foo()).toEqual({})
    })
    

    But when I run the above test: [screenshot]

    It's so baffling to me that I can run the script and everything works as normal, but ESLint and Jest are both throwing fits.

    opened by jakelowen 4
  • datasets are not downloadable anymore

    As stated here: https://github.com/modelfoxdotdev/modelfox/blob/8e7bd80a636b476913211f447255059a09df4878/crates/tree/benchmarks/README.md?plain=1#L12-L18

    The subdomain is not there anymore:

    curl: (6) Could not resolve host: datasets.modelfox.dev
    
    opened by sassman 0
  • [Ruby] Does not work for M1 Mac OSX

    Add the gem

    gem 'modelfox'
    bundle
    
    Stacktrace
    /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/ffi-1.15.5/lib/ffi/library.rb:145:in `block in ffi_lib': Could not open library '/Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/modelfox-0.8.0/lib/modelfox/libmodelfox/x86_64-apple-darwin/libmodelfox.dylib': dlopen(/Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/modelfox-0.8.0/lib/modelfox/libmodelfox/x86_64-apple-darwin/libmodelfox.dylib, 0x0005): tried: '/Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/modelfox-0.8.0/lib/modelfox/libmodelfox/x86_64-apple-darwin/libmodelfox.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/modelfox-0.8.0/lib/modelfox/libmodelfox/x86_64-apple-darwin/libmodelfox.dylib' (no such file), '/Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/modelfox-0.8.0/lib/modelfox/libmodelfox/x86_64-apple-darwin/libmodelfox.dylib' (no such file) (LoadError)
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/ffi-1.15.5/lib/ffi/library.rb:99:in `map'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/ffi-1.15.5/lib/ffi/library.rb:99:in `ffi_lib'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/modelfox-0.8.0/lib/modelfox/modelfox.rb:785:in `<module:LibModelFox>'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/modelfox-0.8.0/lib/modelfox/modelfox.rb:768:in `<module:ModelFox>'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/modelfox-0.8.0/lib/modelfox/modelfox.rb:8:in `<main>'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/bootsnap-1.11.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:30:in `require'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/bootsnap-1.11.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:30:in `require'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/modelfox-0.8.0/lib/modelfox.rb:1:in `<main>'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/bootsnap-1.11.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:30:in `require'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/bootsnap-1.11.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:30:in `require'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/bundler-2.3.10/lib/bundler/runtime.rb:60:in `block (2 levels) in require'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/bundler-2.3.10/lib/bundler/runtime.rb:55:in `each'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/bundler-2.3.10/lib/bundler/runtime.rb:55:in `block in require'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/bundler-2.3.10/lib/bundler/runtime.rb:44:in `each'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/bundler-2.3.10/lib/bundler/runtime.rb:44:in `require'
    	from /Users/amirsharif/.rbenv/versions/3.0.3/lib/ruby/gems/3.0.0/gems/bundler-2.3.10/lib/bundler.rb:176:in `require'
    	from /Users/amirsharif/Projects/halo/config/application.rb:7:in `<top (required)>'
    

    To fix this I renamed one of the dynamic library folders to x86_64-apple-darwin and then it started working.

    opened by Overload119 0
  • Bag of words - what is the delimiter?

    Consider a table:

    | target | words |
    | --- | --- |
    | 1 | This, That, And The Other |
    | 0 | This |
    | 1 | And The Other, That |

    Am I correct that the commas are used as the delimiter to infer the bag of words?

    opened by Overload119 3
  • Support zero-copy training from Python package

    I spent way too much time trying to understand the code base, but I am able to find my bearings now!

    In the first commit, I added a class method Model.train to the Python package, which allows users to do training through the Python package. However, it currently takes the CSV file as its argument and does not take Polars/Pandas data frames yet.

    opened by Chuxiaof 0
  • Debian package is not installable

    Trying to install the Debian Sid (unstable) package:

    $ aptitude install modelfox
    The following NEW packages will be installed:
      modelfox [0.8.0]  
    0 packages upgraded, 1 newly installed, 0 to remove and 76 not upgraded.
    Need to get 0 B/7 507 kB of archives. After unpacking 0 B will be used.
    Retrieving bug reports... Done           
    Parsing Found/Fixed information... Done
    dpkg-deb: error: archive '/var/cache/apt/archives/modelfox_0.8.0_amd64.deb' uses unknown compression for member 'control.tar.zst', giving up
    dpkg: error processing archive /var/cache/apt/archives/modelfox_0.8.0_amd64.deb (--unpack):
     dpkg-deb --control subprocess returned error exit status 2
    Errors were encountered while processing:
     /var/cache/apt/archives/modelfox_0.8.0_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    

    Debian's dpkg doesn't seem to support zst: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=892664

    opened by otsaw 1
  • Training error when column to predict has more than 100 variants

    When the column to predict has more than 100 variants for multiclass classification, the following error occurs during training:

    ✅ Inferring train table columns. 6s
    ✅ Loading train table. 6s
    ✅ Shuffling. 0s 846ms
    ✅ Computing train stats. 10s
    ✅ Computing test stats. 2s
    ✅ Finalizing stats. 11s
    error: invalid target column type
    
    opened by joelchen 2
Releases: v0.8.0