Web-interface + rest API for classification and regression (https://jeff1evesque.github.io/machine-learning.docs)

Overview

Machine Learning

This project provides a web-interface, as well as a programmatic-api, for various machine learning algorithms.

Supported algorithms:

Contributing

Please adhere to contributing.md when contributing code. Pull requests that deviate from contributing.md may be labelled invalid, and closed (without merging to master). These best practices ensure integrity when revisions of code, or issues, need to be reviewed.

Note: inquiries regarding support, and philanthropy, are welcome, to further assist with development.

Configuration

Fork this project using one of the following methods, as sketched below:

  • simple clone: clone the remote master branch.
  • commit hash: clone the remote master branch, then checkout a specific commit hash.
  • release tag: clone the remote branch, associated with the desired release tag.
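
For example, a minimal sketch of each method, assuming the canonical repository URL:

# simple clone: clone the remote master branch
git clone https://github.com/jeff1evesque/machine-learning.git

# commit hash: clone the remote master branch, then checkout a specific
# commit (the hash shown is illustrative)
git clone https://github.com/jeff1evesque/machine-learning.git
cd machine-learning
git checkout 0123abc

# release tag: clone the branch associated with a release tag (e.g. 0.7)
git clone -b 0.7 https://github.com/jeff1evesque/machine-learning.git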

Installation

To proceed with the installation of this project, users will need to decide whether to use the rancher ecosystem, or docker-compose. The former will likely be less reliable, since the corresponding install script may not work nicely across different operating systems. Additionally, this project assumes rancher as the primary method to deploy, and run, the application. So, when using the docker-compose alternative, keep track of what the corresponding endpoints should be.

If users choose rancher, both docker and rancher must be installed. Installing docker must be done manually, to fulfill a set of dependencies. Once completed, rancher can be installed, and automatically configured, by simply executing the provided bash script from the docker quickstart terminal:

cd /path/to/machine-learning
./install-rancher

Note: the installation, and configuration, of rancher has been outlined separately, if more explicit instructions are needed.

If users choose to forgo rancher, and use docker-compose instead, then simply install docker, as well as docker-compose. This will allow the application to be deployed from any terminal console:

cd /path/to/machine-learning
docker-compose up

Note: the installation, and configuration, of docker-compose has been outlined separately, if more explicit instructions are needed.

Execution

Both the web-interface, and the programmatic-api, have corresponding unit tests which can be reviewed, and implemented. It is important to remember that the installation method dictates the endpoint. More specifically, if the application was installed via rancher, then the endpoint will take the form https://192.168.99.101:XXXX. However, if the docker-compose up alternative was used, then the endpoint will likely change to https://localhost:XXXX, or https://127.0.0.1:XXXX.

Web Interface

The web-interface can be accessed within the browser at https://192.168.99.101:8080:

[screenshot: web-interface]

The following sessions are available:

  • data_new: store the provided dataset(s) within the implemented sql database.
  • data_append: append additional dataset(s) to an existing representation (from an earlier data_new session), within the implemented sql database.
  • model_generate: using previously stored dataset(s) (from an earlier data_new, or data_append session), generate a corresponding model into the implemented nosql datastore.
  • model_predict: using a previously stored model (from an earlier model_generate session), from the implemented nosql datastore, along with user supplied values, generate a corresponding prediction.

When using the web-interface, it is important to ensure the csv, xml, or json file(s) representing the corresponding dataset(s) are properly formatted. Poorly formatted dataset(s) will fail to create the respective json dataset representation(s). Subsequently, the dataset(s) will not be stored into the corresponding database tables. This will prevent any models, and subsequent predictions, from being generated.

The following dataset(s) show acceptable syntax:

Note: each dependent variable value (for JSON datasets) is an array (square brackets), since each dependent variable may have multiple observations.
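
For illustration only, a JSON dataset consistent with the above note might resemble the following (the field names, and overall schema, are assumptions, not the project's canonical format):

{
    "dep-variable-1": [
        {"attribute-1": 2.31, "attribute-2": 1.07},
        {"attribute-1": 2.04, "attribute-2": 1.11}
    ],
    "dep-variable-2": [
        {"attribute-1": 0.92, "attribute-2": 3.15}
    ]
}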

Programmatic Interface

The programmatic-interface, or set of APIs, allows users to implement the following sessions:

  • data_new: store the provided dataset(s) within the implemented sql database.
  • data_append: append additional dataset(s) to an existing representation (from an earlier data_new session), within the implemented sql database.
  • model_generate: using previously stored dataset(s) (from an earlier data_new, or data_append session), generate a corresponding model into the implemented nosql datastore.
  • model_predict: using a previously stored model (from an earlier model_generate session), from the implemented nosql datastore, along with user supplied values, generate a corresponding prediction.

A POST request can be implemented in python, as follows:

import requests

endpoint = 'https://192.168.99.101:9090/load-data'
headers = {
    'Authorization': 'Bearer ' + token,
    'Content-Type': 'application/json'
}

requests.post(endpoint, headers=headers, data=json_string_here)

Note: more information, regarding how to obtain a valid token, can be reviewed in the /login documentation.
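
As a minimal sketch, a token could be obtained as follows (the payload field names mirror the login example quoted in the issues below, while the response key holding the token is an assumption):

import requests

# rancher endpoint assumed; adjust the host, and port, for docker-compose
endpoint = 'https://192.168.99.101:9090/login'

login = requests.post(
    endpoint,
    headers={'Content-Type': 'application/json'},
    data={'user[login]': 'my-username', 'user[password]': 'my-password'},
    verify=False  # the application implements a self-signed certificate
)
token = login.json().get('token')  # hypothetical response key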

Note: various data attributes can be nested in the above POST request.

It is important to remember that docker-compose.development.yml defines two port forwards, each assigned to its corresponding reverse proxy. This allows port 8080 on the host to map into the webserver-web container. A similar case applies for the programmatic-api, which uses port 9090 on the host.
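
A minimal sketch of the corresponding docker-compose definition (service names, and container ports, are assumptions; only the host ports 8080, and 9090, are taken from the above):

version: '2'
services:
  nginx-web:
    ports:
      - '8080:443'   # host 8080 -> web-interface reverse proxy
  nginx-api:
    ports:
      - '9090:443'   # host 9090 -> programmatic-api reverse proxy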

Comments
  • Replace puppet 'docker' with containers

    Our intention with our docker containers was to use them for unit testing our application as a whole, while testing the validity of the puppet scripts used to build our development environment. However, this basis is no longer valid, since we implement a docker puppet environment, which is beginning to diverge from the vagrant environment. This means our docker containers are no longer checking the validity of the puppet logic used to build our development environment. And, since the requirements of docker, and vagrant, are not always a one-to-one relationship, we won't always be able to reuse the exact puppet script(s) between the vagrant, and docker, puppet environments.

    Additionally, running puppet in docker is similarly flawed to #2932. Therefore, we will eliminate our puppet implementation within our docker containers used for unit testing. This means we'll remove the docker puppet environment entirely, create as many dockerfiles as there are puppet modules defined in our vagrant puppet environment, and adjust our .travis.yml, respectively.

    readme build unit test documentation 
    opened by jeff1evesque 84
  • Implement reactjs + javascript unit tests

    We need to write some initial reactjs + javascript unit tests, and integrate them into our travis ci builds.

    Note: this issue will leverage the results found from #3084.

    new feature build unit test frontend documentation 
    opened by jeff1evesque 72
  • Add bgc ensemble programmatic-api unit test

    We need to unit test the bgc ensemble method, and develop any backend dependencies, to allow the programmatic-api to cooperate with the desired unit test.

    readme new feature unit test documentation 
    opened by jeff1evesque 44
  • Refactor 'html_form.js' using reactjs framework

    opened by jeff1evesque 30
  • Link docker containers to allow unit testing

    This issue is a continuation of #2153. However, this issue will focus more on accessing the environment variables defined within the corresponding dockerfile. Specifically, we will use the environment variables to run the necessary docker logic. Then, we'll either conditionally run the unit tests (i.e. pytest), or the python application, based on the arguments supplied via the corresponding docker run command.
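
    A hypothetical entrypoint sketch of the described behavior (the script name, paths, and argument convention are assumptions):

    #!/bin/bash
    # conditionally run the unit tests, or the application, based on the
    # argument supplied via the corresponding 'docker run' command
    if [ "$1" = 'test' ]; then
        pytest "$TEST_DIR"    # TEST_DIR assumed defined in the dockerfile
    else
        python app.py
    fi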

    enhancement build unit test 
    opened by jeff1evesque 26
  • Add 'data_new' programmatic interface (API)

    We will create the necessary api to allow users to store a new dataset (in an SQL database), via a programmatic-interface, instead of a standalone web HTML5 interface.

    new feature 
    opened by jeff1evesque 24
  • Create flask upstart script in puppet manifest

    We need to create start_webserver.pp. This script will be responsible for installing flask, and defining the necessary upstart script, to ensure that our flask server is running each time the Ubuntu Server has started.

    The following need to be removed (if present):

    • flask in $packages_flask_pip = ['flask', 'requests'] from install_packages.pp
    build 
    opened by jeff1evesque 22
  • Create necessary docker containers intended for unit testing

    We will create the necessary containers required to replicate our application, so it can be unit tested. Once completed, #2628 will link the containers, and perform the basic unit tests.

    Note: this issue has been adjusted from its original intention to build all necessary containers, and perform the corresponding unit testing. Now it has been segregated into two different issues.

    enhancement new feature build 
    opened by jeff1evesque 21
  • Ensure 'data_*.py' parses svr data into database

    We need to ensure SVR datasets are properly parsed into the db_machine_learning database for the following two cases:

    • new data
    • append data

    This enhancement needs to apply to both the web-interface, as well as the programmatic api. Also, #2587 needs to be resolved prior to this issue.

    bug enhancement readme new feature unit test 
    opened by jeff1evesque 19
  • Ensure flask logs errors to designated file

    Our previous working logic, in our factory.py:

        # log handler: requires the below logger (assumes, within factory.py:
        # import logging
        # from logging.handlers import RotatingFileHandler
        # where LOG_PATH, and HANDLER_LEVEL, are configuration constants)
        formatter = logging.Formatter(
            "[%(asctime)s] {%(pathname)s:%(lineno)d} "
            "%(levelname)s - %(message)s"
        )
        handler = RotatingFileHandler(
            LOG_PATH,
            maxBytes=10000000,
            backupCount=5
        )
        handler.setLevel(HANDLER_LEVEL)
        handler.setFormatter(formatter)
        app.logger.addHandler(handler)

        # logger: complements the log handler
        log = logging.getLogger('werkzeug')
        log.setLevel(logging.DEBUG)
        log.addHandler(handler)

        # return
        return app
    

    The above no longer logs flask errors, even when a deliberate, trivial syntax error is made anywhere in the application.

    bug enhancement readme build unit test documentation webserver 
    opened by jeff1evesque 17
  • Add backend logic to retrieve all prediction results

    We need to add the necessary backend logic, which will be responsible for retrieving all prediction results for a given user. These results will be presented on the /session/results page.

    Note: this issue requires #2872.

    bug new feature unit test database 
    opened by jeff1evesque 16
  • Segregate rancher into another repository

    The implementation of a local rancher instance is unnecessary. Additionally, upgrading to rancher 2.x involves the use of kubernetes. Refactoring the current rancher scripts to accommodate rancher 2.x is cumbersome. Furthermore, expecting each developer to undergo the use of rancher for local development may not be achievable, depending on their allotted resources. Therefore, this project should revert to the vanilla docker-compose implementation, then transfer the rancher implementation to another repository. This segregated repository could be a simple rancher-demonstration, analogous to the puppet-demonstration repository.

    opened by jeff1evesque 0
  • Fix jest errors

    After 3-4 months, some dependencies have broken. Specifically, previous jest-enzyme tests broke. This is apparent since the previous master branch was labelled as passed, then became failed upon manually retriggering the same unchanged branch. Some efforts were made to fix this during the update to ubuntu 16.04. However, this bug is better addressed with its own issue.

    bug unit test 
    opened by jeff1evesque 0
  • External 'requests.post' not working

    When issuing a requests.post from outside the corresponding docker unit tests, we get a 500 error. Specifically, the following was tested from a vagrant virtual machine:

    import requests
    
    username = 'xxxxxxxxxxxx'
    password = 'xxxxxxxxxxxx'
    endpoint = 'xxxxxxxxxxx'
    port = 8585
    headers = {
        'Content-Type': 'application/json',
    }
    
    login = requests.post(
        'https://{}:{}/login'.format(endpoint, port),
        headers=headers,
        data={'user[login]': username, 'user[password]': password},
        verify=False
    )
    token = login.json
    
    print('token: {}'.format(repr(token)))
    

    Then, we get the following error:

    root@ip-172-31-47-47:/home/ubuntu/ist-652# python3 test.py
    /usr/lib/python3/dist-packages/urllib3/connectionpool.py:794: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
      InsecureRequestWarning)
    token: <bound method Response.json of <Response [500]>>
    

    Note: we used verify=False, since the corresponding application implements a self-signed certificate. Additionally, the application redirects all http requests to https.

    bug 
    opened by jeff1evesque 8
  • Create association rules input component

    We need to create a react component, allowing users to input parameters for the association rules:

    • metric type: confidence, lift
    • minimum metric value
    • number of rules to be returned
    new feature frontend 
    opened by jeff1evesque 1
Releases
  • 0.7 (Jun 12, 2018)

    This release involved some major projects, some requiring more effort than others. First, the vagrant development environment was replaced with the rancher orchestration. During this process, we created a single bash script called install_rancher. This script attempts to install a rancher server, then spins up docker containers, contained in a rancher stack. However, it was difficult to generalize this script across multiple operating systems (i.e. windows 7/10, osx, linux).

    Due to limited resources, install_rancher was primarily developed within windows 7, and briefly tested on windows 10. In the upcoming milestone, we will likely modify this script to work within a flavor of linux. This way, we can launch rancher on some internet hosting, with a webhook to the master branch. However, for the time being, users can opt to use the provided docker-compose.yml. If bugs are found with this method, please help us, and report a bug. We are pushing towards getting rancher working. However, the docker-compose method should be a stable, alternate approach.

    The next biggest accomplishment mostly facilitates our development in upcoming milestones. More specifically, we have optimized our unit testing. This includes splitting up the linting, pytest, as well as frontend unit tests, into segregated scripts. Essentially, each segment can be run manually in the local development environment. Most importantly, we have improved the runtime on travis ci, by running each script as a concurrent job. Previously, the entire travis build would take 21+ minutes. We have improved the same build, with additional package installation, to roughly 9 minutes. This also includes the several jest + enzyme frontend unit tests that have already been integrated as npm scripts, intended to be run within the browserify docker container.

    Our next accomplishment really ties in with the first. During the process of dockerizing our vagrant build, we decided to have puppet be the method to provision our containers. Some arguments can be made here. But, ultimately, we like the idea of being able to enforce our environment state, especially if a container could run for an unknown amount of time. Therefore, our puppet modules were completely refactored, by cleverly implementing class variables, as well as hiera configurations. Many times, the two choices provided the same configuration options. These were put to good use in the corresponding dockerfiles. On the same note, our pytests have been configured to allow users to choose whether to build an environment based on local dockerfiles, or the equivalent dockerhub containers.

    Lastly, we did some minor frontend facelifting, as well as updated the scikit-learn library to 0.19.1. The frontend improvements include a solid top navigation bar. When a user logs into the application, a black solid bar will exist at the top of the screen, and include a series of links, associated with the user's account. Furthermore, we integrated bootstrap, to ensure the menu bar, as well as a couple of our other pages, are responsive. We also added some cool range-sliders to our existing model_generate page, allowing users to slide a value for the corresponding penalty, or gamma value, when generating an svm, or svr model. Then, we added a frontpage animation, at least until we have a better design. The animation was a pretty cool D3JS piece. However, it was a tedious process to convert the syntax to be reactjs compliant.

    We have focused largely on standardizing the environment, and attempting to choose a set of technologies for this overall project. So, it's about time to bridge the various algorithms with either a web-interface, or a rest api endpoint. Now, we'll attempt to interface a variety of additional algorithms in the upcoming milestones. However, this will likely involve refactoring our database(s), so users can interface with proper permissions, and abilities to perform particular actions associated with the new algorithms.

    Another thing: we have improved our sphinx documentation, and launched on github pages:

    • https://jeff1evesque.github.io/machine-learning.docs/latest/index.html
  • 0.6.1 (Nov 20, 2017)

    This release encompasses issues pertaining to milestone 0.6.1.

    This short milestone has been motivated by the following bugs:

    • the web-interface returned 500 http errors, upon generating a model using uploaded json dataset(s)
    • the web-interface was not properly logging into the designated flask.log
    • the libssl-dev package broke during build, since its minor version gets updated frequently
    • existing bgr datasets incorrectly used classification style values

    Along the way of squashing the above, the following enhancements were made:

    • views.py has been split, to allow different flask blueprint implementations
    • two separate nginx reverse proxies regulate the gunicorn webservers
      • web-interface
      • rest programmatic-api
    • rest programmatic-api now requires all routes (except /login) to submit a valid token
      • existing unit tests have been updated respectively
    • corresponding README.md, as well as existing documentation have been updated
  • 0.6 (Nov 5, 2017)

    This release encompasses issues pertaining to milestone 0.6.

    This release has taken a significant amount of time, largely due to many important factors. First, the single mariadb database has been split, allowing ML related datasets to be stored in mongodb. This was streamlined to improve performance, and reduce the complexity of the code. Now, users can supply datasets (i.e. a json file), without them needing to be parsed into several dedicated sql database tables. Additionally, anonymous users are limited to uploading a maximum of 50 mongodb collections, while authenticated users are granted 150. Furthermore, the sum of all collections is allowed 10 (anonymous users), or 30 (authenticated users) documents. These values can be configured through the provided application.yaml, which will require the corresponding webserver(s) to be restarted.
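
    A hedged sketch of how such limits might appear within application.yaml (the key names are assumptions; only the values are taken from the above):

        limit:
          anonymous:
            collection: 50
            document: 10
          authenticated:
            collection: 150
            document: 30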

    Also, our flask app is now wired up in such a way that a mariadb, and a mongodb, connection is always open, and ready for transactions. This is a better solution, since each client accessing the ML application doesn't need to open a new connection each time they perform an operation. This was more of a problem when the database was restricted to a single mariadb, since the corresponding sql transactions used to be very granular. If related questions come up, regarding the benefits of having connection pools (versus having a single open connection), we can briefly argue to spin up a dedicated machine, containing another flask instance. However, this application is not yet production grade.

    Additionally, major changes have occurred to help improve many security aspects of the application. For example, the vagrant up build now includes https://, as well as redis implemented in place of the default, traditional cookie implementation. Users can /login through the browser, and have their user information stored in redis, while having a randomized value, corresponding to their redis key, returned to them intrinsically as a cookie. This is better than sending an entire cookie containing all of the user information. Similarly, users can now authenticate through the programmatic-api. Upon a successful post login, flask will return a token, which can be used on successive rest calls, to validate their session as a valid user.

    Also, our build process of enforcing the installation of particular packages (across multiple package managers) has been dynamically streamlined, based on the definition of packages.yaml:

        ## iterate 'packages' hash
        $packages.each |String $provider, $providers| {
            if ($provider in ['apt', 'npm', 'pip']) {
                $providers['general'].each|String $package, String $version| {
                    package { $package:
                        ensure   => $version,
                        provider => $provider,
                        require  => [
                            Class['apt'],
                            Class['python'],
                            Class['package::nodejs'],
                            Class['package::python_dev']
                        ],
                    }
                }
            }
        }
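
    The structure of packages.yaml implied by the above iteration might resemble the following (the package names, and versions, are illustrative):

        packages:
          apt:
            general:
              nodejs: '6.11.4'
          pip:
            general:
              flask: '0.12'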
    

    We've also completed many enhancements to the frontend, which are difficult to formally list. In short, we've begun (though not entirely) to heavily use redux between various reactjs components. Also, two new minimal reactjs pages have been created. One is dedicated to allowing users to save a generated prediction result, through a minimal webform, on /session/current-result, and another lists all previously saved /session/results. Lastly, we have to give thanks to @Vitao18, for converting every jsx file's createClass, and corresponding constructor, to the native javascript syntax.

    Unit testing has dramatically improved, in the context of functionality, and reusability. We now have a single bash script, unit-tests, which contains all the necessary logic to build a sufficient testing environment, before tests are run against it. This allows the script to be used by our travis ci, along with the potential of running the tests locally, even in our vagrant up build.

    You may wonder what the heck the bgc, and bgr, datasets are doing in this milestone. To answer that, you'll have to wait until milestone-0.9 is finally merged to the master branch. Many thanks also go out to @protojas, for helping expedite our future milestone-0.9, with the ensemble learning models.

  • 0.5 (Jan 20, 2017)

    This release encompasses issues pertaining to milestone 0.5.

    The flask application has integrated gunicorn processes, with nginx serving as a reverse proxy. This new feature has significantly enhanced performance. For example, identical unit tests now run about 2x faster than under the previous default flask microframework (i.e. without uwsgi). This can be seen by comparing the unit test benchmark, located on our new pytest.rst page, with the 0.4 release statement.

    To tie together the enhanced performance, various additional pytests have been added (or configured), along with the integration of coveralls. This particular tool is useful, since it indicates the percentage of lines of python code actually unit tested, within the entire application. A small visual representation has been added to the main README.md, in the form of a badge labelled coverage.

    Additionally, necessary database tables were given indexes, to help improve query performance. Also, the previous tbl_feature_value was split into two database tables, to better organize the storing of the supplied dataset(s):

    • tbl_svm_data
    • tbl_svr_data

    On a similar topic of databases, necessary backend constructs were created, in conjunction with the frontend react-redux, to store the userid of logged-in users, via the browser's internal sessionStorage. This allows the application the capability to validate a login attempt, and upon success, store the userid on the frontend, for the duration of a browser session. However, the login feature introduced scrypt (on the backend), a resource intensive algorithm, used to generate, and validate, passwords. Because the implementation is resource expensive, we ensured our Vagrantfile allocated more than enough memory in the virtual machine.
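
    For illustration, password generation, and validation, with scrypt might resemble the following python sketch (the project's actual scrypt library, and parameters, may differ):

        import hashlib
        import os

        # generate: derive a key from the password, using a random salt
        salt = os.urandom(16)
        key = hashlib.scrypt(b'my-password', salt=salt, n=2**14, r=8, p=1)

        # validate: re-derive the key with the stored salt, then compare
        assert key == hashlib.scrypt(b'my-password', salt=salt, n=2**14, r=8, p=1)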

    Note: the login feature lays the foundation of many issues assigned to milestone 0.6.

    With the integration of the login feature, the frontend required some adjustments, along with minor cosmetic touches. This involved the implementation of react-router, which generally enhances the user experience, by ensuring fixed urls are associated with particular reactjs components.

    Of course, we attempted to enhance the general build, and security, of our overall application. So, a new custom ubuntu 14.04 vagrant box was created, on the atlas repository. By creating our own vagrant base box, we were able to generate a corresponding MD5 checksum, which is validated on each vagrant build. If the vagrant box changes in the slightest, the corresponding checksum would change, and the build would not succeed, due to the MD5 mismatch.

    Lastly, we decoupled some background information from the main README.md, into its own dedicated project documentation/. In the future, this documentation/ will be generated into its own dedicated website (possibly via sphinx), and serve as a primary hub for visitors requiring particular how-to's.

  • 0.4 (Sep 15, 2016)

    This release encompasses issues pertaining to milestone 0.4.

    Now, users can perform support vector regression (SVR) analysis (with a returned r^2), while having the flexibility to choose which kernel to employ, both on the webform, and the programmatic api:

    • linear
    • rbf
    • polynomial
    • sigmoid

    This flexibility is also made available for support vector machine (SVM) analysis, which now returns confidence level and decision function measures. Additionally, users can submit a url reference as their dataset, via the webform, or the programmatic api.

    To correspond with the above changes, we've had to refactor our flask implementation to include the app factory notation. This allows travis ci to leverage the necessary components to perform automated unit testing, when code is committed to the github repository, as noted within the official docker wiki page, under How to incorporate python unit testing.... Specifically, both the manual, and automated, unit testing now cover the additional SVR case, which can be executed manually:

    $ cd /path/to/machine-learning/
    $ vagrant up
    $ vagrant ssh
    vagrant@vagrant-ubuntu-trusty-64:~$ (cd /vagrant/test && pytest manual)
    ================================================= test session starts ==================================================
    platform linux2 -- Python 2.7.6, pytest-3.0.2, py-1.4.31, pluggy-0.3.1
    rootdir: /vagrant/test/manual, inifile: pytest.ini
    plugins: flask-0.10.0
    collected 16 items
    
    manual/programmatic_interface/dataset_url/pytest_svm_dataset_url.py ....
    manual/programmatic_interface/dataset_url/pytest_svr_dataset_url.py ....
    manual/programmatic_interface/file_upload/pytest_svm_file_upload.py ....
    manual/programmatic_interface/file_upload/pytest_svr_file_upload.py ....
    
    ============================================== 16 passed in 58.60 seconds ==============================================
    

    Also, some other changes have been implemented. For example, configurations stored within settings.py have been ported to several standardized yaml files (puppet also requires an intermediate hiera.yaml). This flexibility allows both the application, and the provisioner (i.e. puppet), to utilize consistent application settings. Of course, we added yaml linting in the .travis.yml.

    Additionally, we increased the flexibility of the Vagrantfile, such that vagrant destroy removes all cached files (including those from pytest), and added a python Logger class, allowing exceptions to be logged into desired custom log files. Specifically, this added feature is intended to make debugging easier, since the flask application currently runs as a background service, which means typical error messages will print to an unseen background. Finally, the travis ci button, at the top of the README.md, is premised only on the master branch.

  • 0.3 (Apr 10, 2016)

    This release encompasses issues pertaining to milestone 0.3.

    All jquery code (including ajax) has been refactored into a combination of reactjs, fetch, and pure javascript. Correspondingly, eslint has been implemented, with the necessary plugins to lint jsx templates.

    Also, existing upstart scripts were tightened, so only corresponding source file types are compiled. This prevents the compiler from producing an error, if an incorrect file type is placed within a corresponding directory. Finally, the upstart script responsible for compiling jsx templates into js (i.e. browserify), adds an entry of the compiled js filename, within .gitignore, if the corresponding entry did not already exist.

    Lastly, two major changes occurred with the puppet implementation. First, all logic has been streamlined into modules, rather than a slew of manifests. Second, the previous shell script puppet_updater.sh, responsible for updating puppet, now implements the vagrant-puppet-install plugin.

  • 0.2 (Nov 24, 2015)

    This release encompasses issues pertaining to milestone 0.2.

    Now, a programmatic-interface is provided along with the previous web-interface. On a build level, this release includes linting on all scripts, with the exception of puppet (erb) templates, and a handful of open source libraries, via .travis.yml. Bash scripts used for the webcompilers were enhanced with syntax adjustments. These improvements guarantee source files are properly compiled to corresponding asset directories, during the initial build, and on successive source modification, when edited within the vagrant virtual machine.

    Also, high level unit tests can be run:

    $ cd /vagrant/test
    $ sudo pip install pytest
    $ py.test
    ============================= test session starts ==============================
    
    platform linux2 -- Python 2.7.6, pytest-2.8.3, py-1.4.30, pluggy-0.3.1
    rootdir: /vagrant/test, inifile: pytest.ini
    collected 4 items
    
    programmatic_interface/pytest_session.py ....
    
    =========================== 4 passed in 0.43 seconds ===========================
    

    Lastly, among various markdown enhancements, contribute.md has been created, to be integrated when issues are created, along with corresponding pull requests.

    Note: unit test(s) will be incorporated into the travis-docker container build, on a future release.

    Note: the remaining OS related problem associated with milestone 0.1 has been resolved.

  • 0.1 (Sep 9, 2015)

    This release encompasses issues pertaining to milestone 0.1.

    The web interface is currently limited by the client's browser, such that the client's OS sometimes cannot define csv, or json, mime types for file upload(s). This means only the xml file upload(s) can be guaranteed at the moment. However, the next release, corresponding to milestone 0.2, will address this issue.
