
Overview

Lens by Credo AI - Responsible AI Assessment Framework

Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community. In short, Lens connects arbitrary AI models and datasets with Responsible AI tools throughout the ecosystem.

Lens can be run in a notebook, a CI/CD pipeline, or anywhere else you do your ML analytics. It is extensible and easily customized to your organization's assessments if they are not supported by default.

Though it can be used alone, Lens shows its full value when connected to your organization's Credo AI Platform. Credo AI is an end-to-end AI Governance platform that supports multi-stakeholder alignment, AI assessment (via Lens), and AI risk assessment.

Dependencies

  • Credo AI Lens supports Python 3.7+
  • Sphinx (optional for local docs site)

Installation

The latest stable release (and required dependencies) can be installed from PyPI. Note that this installation only includes the dependencies needed for a small set of modules.

pip install credoai

To include additional dependencies needed for some modules and demos, use the following installation command:

pip install credoai[extras]

Getting Started

To get started, we suggest running the quickstart demo: demos/quickstart.ipynb. For a more detailed example, see demos/binaryclassification.ipynb.
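
For orientation, here is a minimal, hedged sketch of a Lens run in the style of the 1.x examples shown further down this page; the dataset, model, and metric choices are placeholders rather than recommendations.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from credoai.artifacts import ClassificationModel, TabularData
from credoai.datasets import fetch_creditdefault
from credoai.evaluators import ModelFairness, Performance
from credoai.lens import Lens

# example data and model (placeholder choices)
data = fetch_creditdefault()
df = data["data"]
df["target"] = data["target"].astype(int)

X = df.drop(columns=["SEX", "target"])
y = df["target"]
sensitive = df["SEX"]
X_train, X_test, y_train, y_test, sens_train, sens_test = train_test_split(
    X, y, sensitive, random_state=42
)
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# wrap model and data, assemble a small pipeline, and run Lens
lens = Lens(
    model=ClassificationModel("credit_default_classifier", model),
    assessment_data=TabularData(
        name="UCI-credit-default-test",
        X=X_test,
        y=y_test,
        sensitive_features=sens_test,
    ),
    pipeline=[Performance(["accuracy_score"]), ModelFairness(["accuracy_score"])],
)
lens.run()
print(lens.get_results())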

Documentation

To build the documentation locally, run make html from the /docs directory; the docs site will build to docs/_build/html/index.html, which can be opened in the browser.

Make sure you have Sphinx installed if you are building the docs site locally.

Configuration

To connect to Credo AI's Governance Platform, enter your connection information in ~/.credoconfig (in your home directory) using the format below.

TENANT={tenant name} # Example: credoai
CREDO_URL=<your credo url>  # Example: https://api.credo.ai 
API_KEY=<your api key> # Example: JSMmd26...
Comments
  • FEAT/shap_evaluator


    Version 1 of the shap evaluator completed.

    Functionality:

    1. Output of summary statistics, currently mean and mean(|x|)
    2. Optional output of shap values for specific samples in the dataset
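
    A hedged sketch of how the evaluator described above might be dropped into a Lens pipeline; the ShapExplainer name and its arguments are assumptions about this PR's API, not confirmed.

    # Hypothetical sketch -- ShapExplainer and its keyword arguments are assumed
    # names for the evaluator added in this PR and may differ from the final API.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    from credoai.artifacts import ClassificationModel, TabularData
    from credoai.evaluators import ShapExplainer  # assumed name
    from credoai.lens import Lens

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    lens = Lens(
        model=ClassificationModel("demo_classifier", model),
        assessment_data=TabularData(name="demo_data", X=X, y=y),
        pipeline=[ShapExplainer()],  # mean and mean(|x|) summaries by default
    )
    lens.run()
    print(lens.get_results())
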
    opened by fabrizio-credo 8
  • NLPGeneratorAnalyzer extended for text attributes


    Change description

    • Extended NLPGeneratorAnalyzer for flexible text attributes assessment
    • Debugged data paths
    • Upgraded the demo

    Type of change

    • [x] Bug fix (fixes an issue)
    • [x] New feature (adds functionality)

    Related issues

    Fix #1

    Checklists

    Development

    • [ ] Lint rules pass locally
    • [ ] Application changes have been tested thoroughly
    • [ ] Automated tests covering modified code pass

    Security

    • [ ] Security impact of change has been considered
    • [ ] Code follows company security practices and guidelines

    Code review

    • [ ] Pull request has a descriptive title and context useful to a reviewer. Screenshots or screencasts are attached as necessary
    • [ ] "Ready for review" label attached and reviewers assigned
    • [ ] Changes have been reviewed by at least one other contributor
    • [ ] Pull request linked to task tracker where applicable
    opened by amrasekh 6
  • Docs/evaluator pages


    Added evaluator schema + evaluators pages

    • Restructured the docs directory to improve tidiness
    • Evaluator schema/how to: docs/pages/evaluiators/make_your_own.rst
    • Each evaluator has an autogenerated page, code for autogeneration: docs/autogeneration/pages/evaluators.py
    opened by fabrizio-credo 5
  • NLP Generator Upgrade


    • Enabled multiple models run support and more customizability
    • Added prompts datasets with standard schema
    • Enabled reporting
    • Updated demo and documentation
    opened by amrasekh 5
  • Add test-reports workflow for develop and main with badges


    Describe your changes

    test-reports.yml workflow will run on pushes to develop and main. It can also be triggered for other branches but the badges will not be updated. Badges are updated by pushing a generated badge to S3 that is referenced in the README.md. The README.md references the badges for the main branch but the URL to retrieve the badges for develop can be discovered easily.

    The permissions to write to the S3 bucket are limited to this workflow running on the develop and main branch.

    Examples of the badges generated for the last run on this branch before lockdown in AWS: [Tests badge] [Coverage badge]

    Issue ticket number and link

    This was a request in Slack.

    Known outstanding issues that are not fully accounted for

    Checklist before requesting a review

    • [x] I have performed a self-review of my code
    • [x] I have built basic tests for new functionality (particularly new evaluators)
    • [ ] If new libraries have been added, I have checked that readthedocs API documentation is constructed correctly
    • [ ] Will this be part of a major product update? If yes, please write one phrase about this update.

    Extra-mile Checklist

    • [ ] I have thought expansively about edge cases and written tests for them
    opened by credo-nate 4
  • Feat/multiclass metrics


    Describe your changes

    Setting up model types to enable multiclass classification metric routing.

    Checklist before requesting a review

    • [x] I have performed a self-review of my code
    • [ ] I have built basic tests for new functionality (particularly new evaluators)
    • [ ] If new libraries have been added, I have checked that readthedocs API documentation is constructed correctly
    • [ ] Will this be part of a major product update? If yes, please write one phrase about this update.

    Extra-mile Checklist

    • [ ] I have thought expansively about edge cases and written tests for them
    opened by IanAtCredo 4
  • Feat/prism


    Prototype for Prism object

    Overall folder/file structure

    • comparators: Contains the basic logic to compare different types of results. Comparators can be a function of the result container type, and potentially of the evaluators

      • comparator.py: abstract class
      • metric_comparator.py: takes a list of Lens objects, extracts metric containers, and compares metrics across all the Lens objects. Comparisons are done with respect to a reference Lens object. Currently the reference is defined by model name, to be changed in the future if necessary. @esherman-credo I simplified the logic; the all-vs-all comparison produced a lot of results that were probably not going to get used. We can reinstate it if need be. I have added the possibility to select different operations.
    • compare.py: this is a type of task that Prism can orchestrate. At the moment it only calls metric_comparator; in the future compare.Compare will instantiate the necessary comparators (table, metric, etc.) depending on the existing result containers.

    • prism.py: the orchestrator; this is the equivalent of Lens but at a higher level. The purpose of Prism is to coordinate the various tasks. Currently, when the user instantiates it, they can pass a dictionary of tasks to perform on the Lens objects. The dictionary is in the format {'task_name': {'parameter': value}}, where parameters are specific to the type of task to perform.

    Missing

    • Docstrings in general
    • Abstract classes for task objects (like compare.Compare). I haven't created an abstract class for this yet; I am wondering if there is a need for it, or if the variety of the objects will be too big for an abstract class to be useful.
    • Tests

    To test the run you can use the following code:

    
    from credoai.lens import Lens
    from credoai.artifacts import TabularData, ClassificationModel
    from credoai.evaluators import *
    from credoai.governance import Governance
    import numpy as np
    
    
    # imports for example data and model training
    from credoai.datasets import fetch_creditdefault
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier
    
    data = fetch_creditdefault()
    df = data["data"]
    df["target"] = data["target"].astype(int)
    
    # fit model
    model1 = RandomForestClassifier(random_state=42)
    X = df.drop(columns=["SEX", "target"])
    y = df["target"]
    sensitive_features = df["SEX"]
    (
        X_train,
        X_test,
        y_train,
        y_test,
        sensitive_features_train,
        sensitive_features_test,
    ) = train_test_split(X, y, sensitive_features, random_state=42)
    model1.fit(X_train, y_train)
    
    model2 = XGBClassifier(tree_method='hist', random_state=42, n_estimators=2, enable_categorical=True)
    model2.fit(X_train, y_train)
    
    
    
    # ## Set up artifacts
    
    credo_RF1 = ClassificationModel(
        'credit_RF1_classifier',
        model1
    )
    
    credo_XGB = ClassificationModel(
        'credit_XGB_classifier',
        model2
    )
    
    train_data = TabularData(
        name = "UCI-credit-default-train",
        X=X_train,
        y=y_train,
        sensitive_features=sensitive_features_train
    )
    test_data = TabularData(
        name='UCI-credit-default-test',
        X=X_test,
        y=y_test,
        sensitive_features=sensitive_features_test
    )
    
    
    # pipelines can be specified using an sklearn-like style
    metrics = ['accuracy_score', 'roc_auc_score']
    pipeline = [
        (Performance(metrics)),
        (ModelFairness(metrics)),
    ]
    
    pipeline2 = [
        (Performance(metrics)),
        (ModelFairness(metrics)),
    ]
    
    
    # and added to lens in one fell swoop
    lens_rf1 = Lens(
        model=credo_RF1,
        assessment_data=test_data,
        # training_data=train_data,
        pipeline=pipeline
    )
    
    
    
    lens_xgb = Lens(
        model=credo_XGB,
        assessment_data=test_data,
        # training_data=train_data,
        pipeline=pipeline2
    )
    
    
    from credoai.prism import Prism
    
    prism_test = Prism(
        [lens_rf1, lens_xgb], 
        tasks={'compare':{'ref':'credit_RF1_classifier'}}
    )
    
    prism_test.execute()
    
    prism_test.get_results()
    
    
    
    enhancement 
    opened by fabrizio-credo 4
  • Specifying multiple data sets when creating lens object causes each evaluator to only be run once


    Expected Behavior

    Lens should return results for each specified dataset, and for each evaluator (e.g. 2 data sets + 1 evaluator --> 2 sets of results).

    Actual Behavior

    Lens is currently only returning one result set when both assessment_data and training_data are passed to the Lens() constructor. It appears that only the results from the training_data are processed (likely because that parameter is listed 2nd?)

    Creating data objects as follows:

    credo_model = ClassificationModel(
        'credit_default_classifier',
        model
    )
    train_data = TabularData(
        name = "UCI-credit-default-train",
        X=X_train,
        y=y_train,
        sensitive_features=sensitive_features_train
    )
    test_data = TabularData(
        name='UCI-credit-default-test',
        X=X_test,
        y=y_test,
        sensitive_features=sensitive_features_test
    )
    

    Creating pipeline and Lens object as follows

    # pipelines can be specified using an sklearn-like style
    metrics = ['accuracy_score', 'roc_auc_score']
    pipeline = [
        (Performance(metrics), 'Performance Assessment'),
        (ModelFairness(metrics), "Fairness Assessment"),
    ]
    
    
    # and added to lens in one fell swoop
    lens = Lens(
        model=credo_model,
        assessment_data=test_data,
        training_data=train_data,
        pipeline=pipeline
    )
    
    lens.run()
    

    Expected Results: dictionary with 2 sets each of Performance and ModelFairness results. Running as above yields results that look like this (not enough entries and entries are clearly on training_data since performance is so strong): [screenshot]

    Commenting out the line that passes train_data to Lens() constructor yields performance that looks like this: [screenshot]

    bug 
    opened by esherman-credo 4
  • Enabling multiple sensitive features support


    This PR aims to enable support of multiple sensitive features in Lens. This is expected to require updates across the module and this working PR covers these updates as they are implemented.

    opened by amrasekh 4
  • Ian/airgap


    Change description

    • Air gapped documentation added to governance notebook.
    • Increased functionality of CredoGovernance object. It now can:
      • Handle registration of models and projects
      • Handle retrieving spec from governance platform
      • Handle retrieving spec from file
    • Documentation for explicit registration of models updated to reflect use of CredoGovernance
    • Better warning, exception and verbose functionality for Lens
    • dataset registration
    opened by IanAtCredo 4
  • Feat/data fairness simplification


    Describe your changes

    Extracted run_cv and get demo_parity from data fairness

    Issue ticket number and link

    Known outstanding issues that are not fully accounted for

    Checklist before requesting a review

    • [x] I have performed a self-review of my code
    • [ ] I have built basic tests for new functionality (particularly new evaluators)
    • [ ] If new libraries have been added, I have checked that readthedocs API documentation is constructed correctly
    • [ ] Will this be part of a major product update? If yes, please write one phrase about this update.

    Extra-mile Checklist

    • [ ] I have thought expansively about edge cases and written tests for them
    opened by fabrizio-credo 3
  • Ranking fairness refactoring


    Refactored Ranking Fairness evaluators

    • Generic metrics moved to metrics_credoai.py
    • FINS metrics refactored to helper function
    • Reformatted some code and documentation
    opened by amrasekh 1
  • Feat/image data stable


    Describe your changes

    This is a feature branch implementing support for tensor-based data and neural network models. For Credo-internal developers, the proposed changes are detailed here.

    Summary of Changes (in progress, more to come):

    • BaseModel
      • Group init functionality in process_model function
      • Split validation functionality into 2 functions:
        1. Checks that required callable functions are present (existing functionality)
        2. Checks that various details of the framework match up with the model wrapper type; throws warnings rather than errors
    • ClassificationModel
      • Add support for Keras-style classifiers (highly restrictive, w/ warnings if assumptions are not met); see the sketch after this list
        • Assume Sequential-style model w/ Dense last (or 2nd to last) layer + (softmax or sigmoid)
        • Implement __post_init__() functionality for keras predictions
          • Depends on sigmoid (label outputs; no predict_proba) vs. softmax (probability outputs; predict_proba by default -> can infer predict)
    • DummyClassifier
      • Passing X + y artifacts to data wrapper when they're already wrapped
      • If X and y are separate in the user's non-Lens workflow, passing these to Lens will be done in the same way as for TabularData
      • If X and y are wrapped in one object (such as a tf.data or keras.utils.Sequence object) then passing them to the Lens wrapper requires separating them
      • Change: Add ability to pass model_like to DummyClassifier
        • This does not fully address usability issue above (e.g., if user wants to run dataset assessments on X and y)
        • Allows user to avoid excessive computation by Lens, while still retaining model details for, e.g., the ModelProfiler evaluator
    • Base Data
      • Converted process functions to non-abstract (committed)
        • Can support tensor data (from Keras + TF, at least; see supported inputs to predict here) with base Data class
        • Working example of data-on-disk model fitting + Lens evaluation exists (not pushed to GH)
    • Security evaluator needed to be refactored (a bit) to use ART's TensorflowV2Classifier rather than their KerasClassifier
      • The latter does not support eager execution
      • The former is ART's current workaround -> eager execution is important for overall TF/Keras support (allows running things on-demand rather than as a batched graph execution)
      • Tests passing
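
    A hedged sketch of the restricted Keras-style classifier described above (Sequential model ending in a Dense layer with softmax), wrapped in ClassificationModel; the wrapping behavior is the intent described in this PR, not necessarily the final implementation.

    # Illustrative only: a Sequential model of the restricted shape described above.
    import numpy as np
    from tensorflow import keras

    from credoai.artifacts import ClassificationModel

    X = np.random.rand(200, 8).astype("float32")
    y = np.random.randint(0, 3, size=200)

    keras_model = keras.Sequential(
        [
            keras.layers.Input(shape=(8,)),
            keras.layers.Dense(16, activation="relu"),
            keras.layers.Dense(3, activation="softmax"),  # probability outputs
        ]
    )
    keras_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    keras_model.fit(X, y, epochs=1, verbose=0)

    # Per the description above: with softmax outputs, predict_proba is available
    # and predict can be inferred from it (argmax over class probabilities).
    credo_model = ClassificationModel("keras_demo_classifier", keras_model)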

    Lens Validation

    Validation of Model + Data: at the Lens init stage, we now verify that predict, predict_proba, and compare (whichever are relevant to the provided model) work for the provided data. This throws an error and prevents instantiating/running evaluators without first checking Model + Data compatibility.

    Evaluator Validation

    Established a starting point for streamlining/unifying artifact checking. Converted check_artifact_for_nulls to check_data... (the name reflects what it is doing) and added options to only check some parts of the artifact (i.e., a subset of X, y, and sensitive_features) for nulls rather than checking all parts. Functionality doesn't fundamentally change, but the requirements are now explicit from the function arguments: check_X, check_y, and check_sens (all boolean). See the sketch after the list below.

    • If datatype is completely arbitrary or generator-like, we have no way of checking; need the user to do this before wrapping
    • This revised null-checker expands capabilities to several non-Pandas types
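
    A hedged sketch of what a null-checker with this signature might look like; the body below is illustrative and not the actual implementation from this PR.

    import pandas as pd

    def check_data(data, check_X=True, check_y=True, check_sens=True):
        """Illustrative null check in the spirit of check_data described above.

        `data` is assumed to expose X, y, and sensitive_features attributes, as
        Lens data artifacts do; the real implementation may differ.
        """
        def has_nulls(part):
            if part is None:
                return False
            # pd.isnull handles pandas objects and array-likes alike
            return bool(pd.isnull(pd.DataFrame(part)).any().any())

        checks = []
        if check_X:
            checks.append(("X", data.X))
        if check_y:
            checks.append(("y", data.y))
        if check_sens:
            checks.append(("sensitive_features", data.sensitive_features))

        failing = [name for name, part in checks if has_nulls(part)]
        if failing:
            raise ValueError(f"Null values found in: {', '.join(failing)}")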

    Issue ticket number and link

    https://credo-ai.atlassian.net/browse/DSP-344

    Known outstanding issues that are not fully accounted for

    • Need to fix/merge the Keras support in the ModelProfiler
    • Tests! Tests! Tests!
    • Confirm documentation is properly built

    Checklist before requesting a review

    • [x] I have performed a self-review of my code
    • [ ] I have built basic tests for new functionality (particularly new evaluators)
    • [ ] If new libraries have been added, I have checked that readthedocs API documentation is constructed correctly
    • [x] Will this be part of a major product update? If yes, please write one phrase about this update.
      • See above. This expands Lens functionality to support Keras models (and, by default, some TF models). This remains relatively experimental -> the space of models and data that "work" is likely much larger than what we have explicitly developed for. We have tried to implement warnings where possible. Some growing pains are likely.

    Extra-mile Checklist

    • [ ] I have thought expansively about edge cases and written tests for them
    opened by esherman-credo 2
  • Bugfix/label subtypes


    Describe your changes

    Modified DataFairness and Privacy evaluators to create results as several Metric containers (one for each row in pre-results DataFrame). This allows passing subtype information as an additional label to the Container constructor (and subsequently the Evidence constructor).

    Opening the PR now, as this fixes the two evaluators where this is known to be an issue. ~Going to look at other evaluators to see whether or not this issue is restricted to DataFairness + Privacy.~

    UPDATE: This or a similar issue also applies to ~IdentityVerification, and RankingFairness.~ It is possible (have not confirmed, but I think it's unlikely) that it also applies to ~Equity and~ SurvivalFairness.

    Security has a subtype thing going on, but the metrics are uniquely identified by the metric_type field...there are only 2 metrics. Equity is fine. IdentityVerification is confirmed to be fine. All metrics/tables created by the evaluator lead to unique labels without further mangling. RankingFairness seems fine for now.

    Final update: there are no existing test cases for Survival; I'm not going to test it right now to see what the label outputs look like. The other evaluators named above have been run and checked to determine what their labeling scheme looks like.

    Issue ticket number and link

    Fixes https://github.com/credo-ai/credoai_lens/issues/265

    Known outstanding issues that are not fully accounted for

    See above comment about other evaluators. Reviewer can refuse to approve until confirmation that no other evaluators are subject to this issue.

    Checklist before requesting a review

    • [x] I have performed a self-review of my code
    • [ ] I have built basic tests for new functionality (particularly new evaluators) N/A
    • [ ] If new libraries have been added, I have checked that readthedocs API documentation is constructed correctly N/A
    • [ ] Will this be part of a major product update? If yes, please write one phrase about this update. No

    Extra-mile Checklist

    • [x] I have thought expansively about edge cases and written tests for them No new tests. This bug was identified as part of implementation of broader Platform+Lens+Connect integration test, which will catch this sort of bug once online.
    opened by esherman-credo 3
  • Lens labels are non-unique for some evaluators, where evidence is further differentiated by subtypes in metadata


    Expected Behavior

    When Lens is matching evidences to requirements, it checks the label of the Lens-generated evidence against the label specified by a policy pack. Lens' labeling should, theoretically, be unique such that a proper labeling in the policy pack will correspond to Lens outputs.

    Actual Behavior

    Evaluators output unique evidence, however sometimes the uniqueness is encoded in evidence metadata rather than in the label directly. This causes Lens error messages saying multiple evidences satisfy a given requirement.

    For example, consider the following error message: [screenshot]

    This corresponds to the following evidences in the outputted JSON file:

    {
              "type": "metric",
              "label": {
                "metric_type": "MembershipInference",
                "evaluator": "Privacy"
              },
              "data": {
                "value": 0.5922,
                "confidence_interval": null,
                "confidence_level": null
              },
              "generated_at": "2022-11-30T17:01:02.771421",
              "metadata": {
                "subtype": "BlackBoxRuleBased",
                "evaluator": "Privacy",
                "model_name": "credit_default_classifier",
                "assessment_dataset_name": "UCI-credit-test",
                "training_dataset_name": "UCI-credit-train"
              }
            },
            {
              "type": "metric",
              "label": {
                "metric_type": "MembershipInference",
                "evaluator": "Privacy"
              },
              "data": {
                "value": 0.5674666666666667,
                "confidence_interval": null,
                "confidence_level": null
              },
              "generated_at": "2022-11-30T17:01:02.771446",
              "metadata": {
                "subtype": "BlackBox",
                "evaluator": "Privacy",
                "model_name": "credit_default_classifier",
                "assessment_dataset_name": "UCI-credit-test",
                "training_dataset_name": "UCI-credit-train"
              }
            },
            {
              "type": "metric",
              "label": {
                "metric_type": "MembershipInference",
                "evaluator": "Privacy"
              },
              "data": {
                "value": 0.5922,
                "confidence_interval": null,
                "confidence_level": null
              },
              "generated_at": "2022-11-30T17:01:02.771467",
              "metadata": {
                "subtype": "attack_score",
                "evaluator": "Privacy",
                "model_name": "credit_default_classifier",
                "assessment_dataset_name": "UCI-credit-test",
                "training_dataset_name": "UCI-credit-train"
              }
            },
    

    The subtype field in the metadata is sufficient to uniquely identify the 3 different evidences, while the information in the label field is not. (It's not clear that it will ALWAYS be the case that looking at subtype + label is enough to identify, there might be other fields...)

    This appears to be an issue for the Privacy and DataFairness evaluators since these inherently calculate several metrics by default and they're stored in results dataframes where there is a column for the subtype.

    Potential solutions:

    1. Add subtype to the label by pulling from the metadata object (where it currently lives) here https://github.com/credo-ai/credoai_connect/blob/develop/connect/evidence/evidence.py#L95 <- Bad solution since this is inconsistent across evaluators...unless we want the entire label to be [present label] (union) [metadata]...still probably a bad idea.
    2. Pass extra labeling information to the evidence object when it's created here https://github.com/credo-ai/credoai_lens/blob/develop/credoai/evaluators/privacy.py#L169 <- better, but requires us to go check every evaluator for this sort of behavior and to remember to do this for any future evaluators
    3. Enable specifying evidence requirements in terms of the entire evidence object, not just the label. This would entail something like (on PP Editor) specifying labels: [metric_type: ..., evaluator: ...], metadata: [subtype: ...] <- this might be the best long-term solution since it provides a lot of flexibility for the policy pack creator to match arbitrary outputs from Lens (as long as the outputs from Lens are unique, which they presently are). Likely much more involved to implement.

    Solution 2 is probably the best for right now. Solution 3 should be on the radar as a future feature.
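
    A hedged sketch of what Solution 2 could look like when an evaluator builds its evidence; plain dicts stand in for the Evidence objects, and the real constructors in Lens/Connect may take different arguments.

    import pandas as pd

    # Pretend pre-results DataFrame as described above: one row per metric subtype.
    pre_results = pd.DataFrame(
        {
            "metric_type": ["MembershipInference"] * 2,
            "subtype": ["BlackBoxRuleBased", "BlackBox"],
            "value": [0.5922, 0.5675],
        }
    )

    evidences = []
    for _, row in pre_results.iterrows():
        evidences.append(
            {
                # Solution 2: pass the subtype as extra labeling information so
                # each piece of evidence gets a unique label
                "label": {
                    "metric_type": row["metric_type"],
                    "subtype": row["subtype"],
                    "evaluator": "Privacy",
                },
                "data": {"value": row["value"]},
                "metadata": {"evaluator": "Privacy"},
            }
        )

    # labels are now unique even though metric_type repeats
    assert len({tuple(sorted(e["label"].items())) for e in evidences}) == len(evidences)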

    bug 
    opened by esherman-credo 1
  • added validation and tests for survival fairness (WIP)


    Change description

    Description here

    Type of change

    • [ ] Bug fix (fixes an issue)
    • [ ] New feature (adds functionality)

    Related issues

    Fix #1

    Checklists

    Development

    • [ ] Lint rules pass locally
    • [ ] Application changes have been tested thoroughly
    • [ ] Automated tests covering modified code pass

    Security

    • [ ] Security impact of change has been considered
    • [ ] Code follows company security practices and guidelines

    Code review

    • [ ] Pull request has a descriptive title and context useful to a reviewer. Screenshots or screencasts are attached as necessary
    • [ ] "Ready for review" label attached and reviewers assigned
    • [ ] Changes have been reviewed by at least one other contributor
    • [ ] Pull request linked to task tracker where applicable
    opened by IanAtCredo 0
  • "Summary" of results

    Desired Behavior

    Depending on the pipeline, Lens can create many results, which are hard to visualize. I would like two things:

    1. Some basic printout that is easy to read
    2. A summary perspective that prioritizes certain metrics that anyone who used an evaluator would care about. E.g., most people probably care about top-line performance, disaggregated performance, and parity metrics. Could we create a summary function?
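
    A rough, hedged sketch of what such a summary helper could look like; it assumes lens.get_results() returns a mapping from evaluator names to lists of DataFrame-convertible results, which is an assumption about the return structure rather than the documented API.

    import pandas as pd

    PRIORITY = ("accuracy_score", "roc_auc_score", "demographic_parity_ratio")

    def summarize(results):
        """Print a compact, prioritized view of Lens results (illustrative)."""
        for evaluator_name, evaluator_results in results.items():
            print(f"== {evaluator_name} ==")
            for res in evaluator_results:
                df = pd.DataFrame(res)
                # surface priority metrics first, if a column identifies them
                if "type" in df.columns:
                    priority_mask = df["type"].isin(PRIORITY)
                    df = pd.concat([df[priority_mask], df[~priority_mask]])
                print(df.to_string(index=False))

    # usage (assumed): summarize(lens.get_results())
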
    enhancement 
    opened by IanAtCredo 0
Releases (v1.1.2)
  • v1.1.2(Nov 11, 2022)

    What's Changed

    • Tests/run quickstart by @esherman-credo in https://github.com/credo-ai/credoai_lens/pull/228
    • Identity verification assessment by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/222
    • Main by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/231
    • modify quantization of roc and pr curve interpolation helpers to ensure adequately dense interpolation by @esherman-credo in https://github.com/credo-ai/credoai_lens/pull/233
    • Docs/list metrics by @esherman-credo in https://github.com/credo-ai/credoai_lens/pull/234
    • Feat/deepchecks by @esherman-credo in https://github.com/credo-ai/credoai_lens/pull/220
    • Identity Verification - tests and validations by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/235
    • 238 shallow copy issue creating lens objects with the same pipeline object causes the latter object to overwrite the results from the former by @esherman-credo in https://github.com/credo-ai/credoai_lens/pull/239
    • Bugfix/docs equity by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/241
    • Upgrading RankingFairness Evaluator by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/242
    • Modify get_model_info and update_functionality so that model frameworks are full string. Enables use of XGBoost within Credo Models by @esherman-credo in https://github.com/credo-ai/credoai_lens/pull/237
    • Release/1.1.2 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/243

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v1.1.1...v1.1.2

  • v1.1.1(Nov 2, 2022)

  • v1.1.0(Oct 28, 2022)

    What's Changed

    • Feat/governance update by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/221
    • Feat/contextual tagging system by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/224
    • Fixing function to find all evaluator subclasses by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/225
    • Release/1.1.0 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/226

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v1.0.1...v1.1.0

  • v1.0.1(Oct 27, 2022)

    Note: IDs have been removed from the pipeline specification. Each pipeline list item should either be an evaluator or a tuple of (evaluator, metadata dictionary).
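
    A hedged illustration of the two accepted pipeline-item forms described in the note above; the evaluator names come from examples elsewhere on this page, and the metadata keys are made up.

    # Each pipeline item is either an evaluator instance or an
    # (evaluator, metadata dictionary) tuple; the metadata keys are illustrative.
    from credoai.evaluators import ModelFairness, Performance

    metrics = ["accuracy_score", "roc_auc_score"]
    pipeline = [
        Performance(metrics),                                     # bare evaluator
        (ModelFairness(metrics), {"purpose": "fairness check"}),  # (evaluator, metadata)
    ]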

    What's Changed

    • Assessment data validation fix by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/204
    • Feat/gini coefficient by @esherman-credo in https://github.com/credo-ai/credoai_lens/pull/205
    • Bugfix/json by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/207
    • quick import fix by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/208
    • Feat/model profiler by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/202
    • FEAT/shap_evaluator by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/201
    • Feat/feature drift by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/211
    • Model zoo test freeze by @esherman-credo in https://github.com/credo-ai/credoai_lens/pull/209
    • Bug/210/shap error multi class by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/215
    • made 'name' attribute implicitly defined, added PipelineStep class, r… by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/214
    • Feat/vocalink requirements matching by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/216
    • Bugfix/generator by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/217
    • Updated metrics doc by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/219
    • Release/1.0.1 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/218

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v1.0.0...v1.0.1

  • v1.0.0(Oct 11, 2022)

    What's Changed

    Lens has had a large refactor. This is a breaking change.

    • Assessments and modules have been removed, streamlined into evaluators.
    • CredoData and CredoModel have been converted into more specialized artifacts. While the base abstract classes "Model" and "Data" remain, users will use children like "TabularData" or "ClassificationModel"
    • Lens now makes use of a pipeline of evaluators that must be specified by the user or governance (via the Responsible AI Platform). No automated pipeline creation is available at this time.
    • Evaluators now create EvidenceContainers rather than dataframes. These EvidenceContainers are essentially dataframes with extended functionality. Namely, the form of the dataframe is validated, and the dataframe can be converted to evidence which can be exported to the Responsible AI Platform
    • All reporting has been removed
    • A new survival analysis evaluator has been added
    • "threshold metrics" have been added, allowing the performance and modelfairness evaluators to calculate analyses like ROC curves.
    • Logging has been extended and improved.

    What's Changed Continued

    • changed to _setup scheme from call by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/169
    • changes by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/171
    • Release/1.0/automation test by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/174
    • Add governance module to 1.0.0 branch by @aproxacs in https://github.com/credo-ai/credoai_lens/pull/173
    • feat(): export assessment to file by @aproxacs in https://github.com/credo-ai/credoai_lens/pull/176
    • Release/1.0/create tests by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/178
    • Can register without policy_pack_key by @aproxacs in https://github.com/credo-ai/credoai_lens/pull/179
    • Feat/governance in lens by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/175
    • Ranking fairness evaluator by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/177
    • Feat/governance ingestion by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/180
    • Updated pandas-profiling version by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/181
    • Fix/183 probability dependent metrics do not run not added to pipeline by @esherman-credo in https://github.com/credo-ai/credoai_lens/pull/184
    • update predict_proba when binary classification in sklearn framework by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/185
    • Release/1.0.0 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/186
    • Feat/len faq by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/187
    • Removed NLP Scripts by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/188
    • Fixed notebook error by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/190
    • fixed processing and validation that were somehow lost by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/191
    • Fixed 5 minutes quick tutorial by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/194
    • Feat/thresh varying evidence by @esherman-credo in https://github.com/credo-ai/credoai_lens/pull/189
    • Refact/remove prepare results by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/193
    • fixed bugs with fairness after merging by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/196
    • Ian/descriptive gov response by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/197
    • Make privacy python3.8 compatible by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/198
    • Feat/survival eval by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/199
    • Release/1.0.0 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/200

    New Contributors

    • @esherman-credo made their first contribution in https://github.com/credo-ai/credoai_lens/pull/184

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v0.2.1...v1.0.0

  • v0.2.1(Sep 12, 2022)

    What's Changed

    • Feat/privacy att inf by @fabrizio-credo in https://github.com/credo-ai/credoai_lens/pull/164
    • Release/0.2.0 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/166
    • Fixed quickstart demo by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/167
    • feat(): _register_model returns status string by @aproxacs in https://github.com/credo-ai/credoai_lens/pull/168
    • Release/0.2.1 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/170

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v0.2.0...v0.2.1

  • v0.2.0(Aug 23, 2022)

    What's Changed

    • Feat/new credo data by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/162
      • CredoData has been overhauled. Rather than passing a dataframe, X, y and/or sensitive features are now passed directly (see the sketch after this list)
      • This change allows non-dataframe X's to be used (arrays or tensors), and brings the experience more in line with other ML flows
      • Breaking change! CredoData objects will need to be updated
    • bugfixes by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/165
    • AttributeInference Attack added to privacy module
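
    A hedged sketch of the new-style CredoData construction described above; the keyword names and import path are assumed from the release note, so treat this as illustrative rather than the confirmed v0.2.0 signature.

    # Illustrative only: X, y, and sensitive features are now passed directly,
    # so X need not be a dataframe. Keyword names and import path are assumed.
    import numpy as np
    from credoai.data import CredoData  # import path assumed

    X = np.random.rand(100, 4)                      # array X is now allowed
    y = np.random.randint(0, 2, size=100)
    sensitive = np.random.choice(["A", "B"], size=100)

    data = CredoData(
        name="example_dataset",
        X=X,
        y=y,
        sensitive_features=sensitive,
    )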

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v0.1.7...v0.2.0

  • v0.1.7(Aug 5, 2022)

    What's Changed

    • hotfix by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/144
    • Release/0.1.6 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/152
    • reformatted everything by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/155
    • Feat/munichre register by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/154
    • Feat/change sensitive feature implementation by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/153
    • fix pandas_profiling version to 3.1.0 by @aproxacs in https://github.com/credo-ai/credoai_lens/pull/156
    • Add CredoApiClient by @aproxacs in https://github.com/credo-ai/credoai_lens/pull/157
    • Upgraded privacy and security modules by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/159
    • Release/0.1.7 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/160

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v0.1.6...v0.1.7

  • v0.1.5(Jun 29, 2022)

    What's Changed

    • Release/0.1.3 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/139
    • Release/0.1.4 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/142
    • changed how errors with assessment plan are handled by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/143
    • hotfix by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/144

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v0.1.4...v0.1.5

  • v0.1.3(Jun 15, 2022)

    Topline summary

    • Parts of Adversarial Robustness Toolbox functionality incorporated
      • Security module with evasion and extraction attacks
      • Privacy Module with membership inference attacks
    • Requirements expanded to route models/datasets better
    • Reporting functionality drastically changed

    What's Changed

    • Release 0.1.2 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/128
    • Merge pull request #128 from credo-ai/release_0.1.2 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/129
    • Language generator update by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/131
    • Feat/new reporting by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/130
    • Security module by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/134
    • Feat/requirements proxy by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/135
    • Bugfix/bugs by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/136
    • fixed concatenation issue leading to incorrect infographic aspects by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/137
    • changed assess_at at back by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/138

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v0.12...v0.1.3

  • v0.12(May 23, 2022)

    • Updates to endpoints to interact with updated Governance Platform
    • Privacy module added that performs membership inference attacks
    • Labeling of metrics
    • Bug fixes

    What's Changed

    • Register validation datasets to model-use-case by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/103
    • Main by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/105
    • Regression assessment by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/107
    • Updated and rerun the demo by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/106
    • Bugfix/register by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/109
    • fixed reporter overwriting bug by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/111
    • Metrics page by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/110
    • Regression reporting by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/108
    • Added percentage labels to data balance plot by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/114
    • Feat/labels by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/113
    • changed pred_fun/prob_fun -> predict/predict_proba. Made sklearn infe… by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/115
    • Regression features by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/112
    • Ian/integration updates by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/116
    • Ian/misc updates by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/118
    • Metric search exception handling by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/117
    • Added training data to CredoAssessment by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/119
    • added basic computational efficiency inference module by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/121
    • removed duplicatations when performance and fairness are used together by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/123
    • Ian/installation direction improvements by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/122
    • Enabling multiple sensitive features support by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/99
    • Requirements update by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/126
    • Feat/new endpoint by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/124
    • Feat/policy checklist by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/125
    • Privacy module by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/127

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v0.1.1...v0.12

  • v0.1.1(Apr 19, 2022)

    What's Changed

    • New release by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/82
    • Release/0.1.0 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/93
    • Main by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/94
    • fix: added better error reporting, fixed error with dataset fairness by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/95
    • Feat/training_validation_dataset by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/96
    • Dataset profiling module by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/97
    • fixed issue related to pandas profiler setting matplotlib backend by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/98
    • fixed bug with dataset profiler and empty prepared results by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/100
    • Feat/metrics wo lens by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/101
    • fixed bug where lens wasn't dealing with assessments that don't use data by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/102
    • Release/0.1.1 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/104

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v0.1.0...v0.1.1

  • v0.1.0(Mar 18, 2022)

    What's Changed

    • Ian/circular import bug by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/61
    • fixed ylabel setting by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/62
    • Ian/nlp report improv by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/49
    • added visualization updates for all assessments. Split out from PR #63 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/65
    • Data assess doc by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/64
    • Ian/new report exports by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/63
    • Ian/report content by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/67
    • fixed bug with docs and added dev mode by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/68
    • fixed bug with demo where credodata wasn't updated by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/69
    • updated dataset preparation to work by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/70
    • Ian/cleanup by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/72
    • Ian/api updates by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/71
    • Fix async notebook execution by using reentrant asyncio by @credo-eli in https://github.com/credo-ai/credoai_lens/pull/73
    • Doc updates by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/75
    • Ian/metric constant restructure by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/74
    • Ian/regression reporter by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/76
    • fixed import by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/77
    • fixed binary classification demo by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/78
    • small visual fix by @credo-eli in https://github.com/credo-ai/credoai_lens/pull/79
    • Ian/binaryclass fix by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/80
    • release script by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/83
    • update copy by @credo-eli in https://github.com/credo-ai/credoai_lens/pull/84
    • Ian/dogfooding changes by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/85
    • Ian/performance by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/81
    • change check condition for empty standardized group diffs by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/87
    • removed unnecessary values when downloading json by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/86
    • Ian/dogfooding updates by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/88
    • restricted labeling based dataset metrics if there are too many label… by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/89
    • Ian/changes for rai by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/90
    • Ian/changes before release by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/91

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v0.0.5...v0.1.0

  • v0.0.6(Mar 12, 2022)

    What's Changed

    • Ian/circular import bug by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/61
    • fixed ylabel setting by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/62
    • Ian/nlp report improv by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/49
    • added visualization updates for all assessments. Split out from PR #63 by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/65
    • Data assess doc by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/64
    • Ian/new report exports by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/63
    • Ian/report content by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/67
    • fixed bug with docs and added dev mode by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/68
    • fixed bug with demo where credodata wasn't updated by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/69
    • updated dataset preparation to work by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/70
    • Ian/cleanup by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/72
    • Ian/api updates by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/71
    • Fix async notebook execution by using reentrant asyncio by @credo-eli in https://github.com/credo-ai/credoai_lens/pull/73
    • Doc updates by @amrasekh in https://github.com/credo-ai/credoai_lens/pull/75
    • Ian/metric constant restructure by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/74
    • Ian/regression reporter by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/76
    • fixed import by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/77
    • fixed binary classification demo by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/78
    • small visual fix by @credo-eli in https://github.com/credo-ai/credoai_lens/pull/79
    • Ian/binaryclass fix by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/80
    • release script by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/83
    • New release by @IanAtCredo in https://github.com/credo-ai/credoai_lens/pull/82

    Full Changelog: https://github.com/credo-ai/credoai_lens/compare/v0.0.5...v0.0.6

Owner
Credo AI
AI governance and risk management to deliver responsible AI at scale.