Transform-Invariant Non-Negative Matrix Factorization

Overview


A comprehensive Python package for Non-Negative Matrix Factorization (NMF) with a focus on learning transform-invariant representations.

The package supports multiple optimization backends and can easily be extended to handle application-specific types of transforms.

General Introduction

A general introduction to Non-Negative Matrix Factorization and the purpose of this package can be found on the corresponding GitHub Pages.

Installation

To use this package, you will need Python 3.7 or higher. The package is available via PyPI.

Installation is easiest using pip:

pip install tnmf
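
Once installed, a model can be fitted with a few lines of Python. The following is a minimal sketch only: the class name TransformInvariantNMF matches the package's API documentation, but the parameter values are illustrative assumptions, and attribute names should be double-checked against the API reference.

import numpy as np
from tnmf.TransformInvariantNMF import TransformInvariantNMF

# toy data: 16 samples with one channel of 32x32 pixels each
V = np.random.uniform(size=(16, 1, 32, 32))

# parameter names as described in the API documentation;
# the concrete values here are illustrative assumptions
nmf = TransformInvariantNMF(
    n_atoms=8,            # number of dictionary atoms to learn
    atom_shape=(5, 5),    # spatial extent of each atom
    backend='numpy_fft',  # one of the supported optimization backends
)
nmf.fit(V, n_iterations=100)

# learned atoms W, activations H, and the resulting reconstruction R
W, H, R = nmf.W, nmf.H, nmf.R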

Demos and Examples

The package comes with a streamlit demo and a number of examples that demonstrate the capabilities of the TNMF model. They provide a good starting point for your own experiments.

Online Demo

Without requiring any installation, the demo is accessible via streamlit sharing.

Local Execution

Once the package is installed, the demo and the examples can be conveniently executed locally using the tnmf command:

  • To execute the demo, run tnmf demo.
  • A specific example can be executed by calling tnmf example <name>.

To show the list of available examples, type tnmf example --help.

License

Copyright (c) 2021 Merck KGaA, Darmstadt, Germany

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

The full text of the license can be found in the file LICENSE in the repository root directory.

Contributing

Contributions to the package are always welcome and can be submitted via a pull request. Please note that you have to agree to the Contributor License Agreement to contribute.

Working with the Code

To checkout the code and set up a working environment with all required Python packages, execute the following commands:

git clone https://github.com/emdgroup/tnmf.git ./tnmf
cd tnmf
python3 -m virtualenv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt

Now, you should be able to execute the unit tests by calling pytest to verify that the code is running as expected.

Pull Requests

Before creating a pull request, you should always try to ensure that the automated code quality and unit tests do not fail. This section explains how to run them locally to understand and fix potential issues.

Code Style and Quality

Code style and quality are checked using flake8 and pylint. To execute them, change into the repository root directory, run the following commands and inspect their output:

flake8
pylint tnmf

For a pull request to be acceptable, no errors may be reported here.

Unit Tests

Automated unit tests reside inside the folder tnmf/tests. They can be executed via pytest by changing into the repository root directory and running

pytest

Debugging potential failures from the command line might be cumbersome. Most Python IDEs, however, also support pytest natively in their debugger. Again, for a pull request to be acceptable, no failures may be reported here.
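
For instance, pytest's standard selection options can be used to debug an individual test in isolation; the module and test names below are placeholders, not actual files from the test suite:

pytest tnmf/tests/test_example.py -k "some_test_name" -x

Here, -k filters tests by name and -x stops at the first failure.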

Code Coverage

Code coverage in the unit tests is measured using coverage. A coverage report can be created locally from the repository root directory via

coverage run
coverage combine
coverage report

This will output a concise table with an overview of the Python files that are not fully covered by unit tests, along with the line numbers of code that has not been executed. A more detailed, interactive report can be created using

coverage html

Then, you can open the file htmlcov/index.html in a web browser of your choice to navigate through the code annotated with coverage data. The required overall coverage is configured in setup.cfg under the key fail_under in the section [coverage:report].
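
For reference, this configuration entry takes roughly the following shape; the threshold value shown here is purely illustrative and not the project's actual setting:

[coverage:report]
fail_under = 90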

Building the Documentation

To build the documentation locally, change into the doc subdirectory and run make html. The documentation then resides at doc/_build/html/index.html.

Comments
  • Inhibition

    Hi @AdrianSosic, this PR adds inhibition, essentially identical to how it was done in the old implementation (except that this one supports different inhibition kernel sizes in the different directions, which is in line with the support for non-square atoms).

    What do you think? Maybe adding cross-atom inhibition might make sense as well, but this could also be a follow-up project, as the parametrization etc. would still have to be discussed...

    opened by dasmy 1
  • pytorch-backend: only set requires_grad=True where really necessary

    The W and H tensors that are visible outside the backend do not need to support gradient computation. Instead, we only need this locally (and only for either H or W).

    opened by dasmy 1
  • add a CITATION.cff

    See https://academia.stackexchange.com/a/172780, https://citation-file-format.github.io, https://github.com/citation-file-format/citation-file-format/blob/1.1.0/README.md#software-citation-metadata-required

    example: https://github.com/emdgroup/brain_waves_for_planning_problems/blob/main/CITATION.cff

    opened by dasmy 0
  • Redesign fit interfaces

    The way the different fitting interfaces (batch, minibatch, stream) are currently implemented has several problems:

    • The variable naming is inconsistent and misleading but cannot be changed (e.g., the streaming method also processes minibatches, but the corresponding kwargs cannot be renamed accordingly because the higher-level fit function uses the names to differentiate between the methods).
    • Switching between the different functions is not straightforward (e.g., it requires adding an additional algorithm kwarg), which leads to if-else logic in the tests.

    A potential solution might be to create a higher-level abstract Algorithm class that has the current MiniBatchAlgorithm as a subclass together with others that cover the batch and streaming scenario.

    enhancement 
    opened by AdrianSosic 0
  • Inconsistent flake8 errors

    I am getting some flake8 errors locally that do not show up in the GitHub Actions:

    ./build/lib/tnmf/tests/test_init.py:10:5: F403 'from tnmf import *' used; unable to detect undefined names
        from tnmf import *
        ^
    ./build/lib/tnmf/tests/test_init.py:10:5: F401 'tnmf.*' imported but unused
        from tnmf import *
        ^
    1     F401 'tnmf.*' imported but unused
    1     F403 'from tnmf import *' used; unable to detect undefined names
    2
    
    opened by AdrianSosic 0
  • Streamlit sharing demo crashes for large data

    The streamlit sharing demo simply crashes when it exceeds the memory resources of the provided machine. Maybe we can detect at runtime whether the demo is executed locally or in the cloud. In the latter case, we could then show a hint to the user that this is the "online version" of the demo, which is running with limited resources, and prevent the execution of settings that would lead to a crash.

    opened by AdrianSosic 0
  • Inhibition based on orthogonality of partial reconstruction

    Currently, the inhibition gradient is computed in a way that suppresses neighboring activations. Intuitively, we might instead want to suppress activations that lead to overlapping atoms in the reconstruction. Thus, another inhibition mode that ensures pairwise orthogonality of the resulting partial reconstructions might make sense.

    enhancement 
    opened by dasmy 0
  • Inhibition does not consider reconstruction mode

    Presumably, the convolution that is used to compute the inhibition gradient does not consider the reconstruction mode. This is potentially incorrect: at least intuitively, I would expect that for circular boundary conditions, an activation at one border should suppress activations at the opposite border.

    @AdrianSosic : What do you think?

    bug question 
    opened by dasmy 1