A boosting-based Multiple Instance Learning (MIL) package that includes MIL-Boost and MCIL-Boost

MCILBoost

Project | CVPR Paper | MIA Paper
Contact: Jun-Yan Zhu (junyanz at cs dot cmu dot edu)

Overview

This is the authors' implementation of the MCIL-Boost method described in:
[1] Multiple Clustered Instance Learning for Histopathology Cancer Image Segmentation, Clustering, and Classification.
Yan Xu*, Jun-Yan Zhu*, Eric Chang, and Zhuowen Tu (*equal contribution)
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.

[2] Weakly Supervised Histopathology Cancer Image Segmentation and Classification.
Yan Xu, Jun-Yan Zhu, Eric I-Chao Chang, Maode Lai, and Zhuowen Tu
In Medical Image Analysis, 2014.

Please cite our papers if you use our code for your research.

This package consists of the following two multiple-instance learning (MIL) methods:

  • MIL-Boost [Viola et al. 2006]: set c = 1
  • MCIL-Boost [1] [2]: set c > 1

The core of this package is a command-line interface written in C++. Matlab helper functions are provided so that users can easily train/test an MCIL-Boost model, perform cross-validation, and evaluate performance.

System Requirements

  • Supported platforms: Linux and Windows.
  • For Linux, the code was compiled with gcc 4.8.2 under Ubuntu 14.04.

Installation

  • Download and unzip the code.
    • For Linux users, type "chmod +x MCILBoost".
  • Open Matlab and run "demoToy.m".
  • To use the command-line interface, see "Command Usage".
  • To use the Matlab functions, see "Matlab helper functions". You can modify "SetParamsToy.m" and "demoToy.m" to run your own experiments.

Quick Examples

(Windows: MCILBoost.exe; Linux: ./MCILBoost)
An example for training:
MCILBoost.exe -v 2 -t 0 -c 2 -n 150 -s 0 -r 20 toy.data toy.model
An example for testing:
MCILBoost.exe -v 2 -t 1 -c 2 toy.data toy.model toy.result

Command Usage ([ ]: options)

MCILBoost.exe [-v verbose] [-t mode] [-c #clusters] [-n #weakClfs] [-s softmax] [-r exponent] data_file model_file [result_file]
(There is no need to specify -c, -n, -s, or -r for testing; the program copies these parameters from model_file.)

-v verbose: set the verbosity of the runtime output (default = 1): 0 -- no output; 1 -- some output; 2 -- more output

-t mode: set the mode (default = 0): 0 -- train a model; 1 -- test a model

-c #clusters: set the number of clusters in positive bags (default = 1): c = 1 -- train a MIL-Boost model; c > 1 -- train an MCIL-Boost model with multiple clusters

-n #weakClfs: set the maximum number of weak classifiers (default = 150)

-s softmax: set the softmax type (default = 0): 0 -- GM; 1 -- LSE

-r exponent: set the exponent used in GM and LSE (default = 20)
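
For reference, GM (generalized mean) and LSE (log-sum-exp) are the two softmax approximations of the max operator used in MIL-style boosting; a larger exponent r makes both sharper, i.e. closer to the true max. As a sketch of the standard definitions (see the papers for the exact forms used in this package), for instance-level probabilities p_1, ..., p_m:

    GM:  g(p_1, ..., p_m) = ((1/m) * sum_j p_j^r)^(1/r)
    LSE: g(p_1, ..., p_m) = (1/r) * ln((1/m) * sum_j exp(r * p_j))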

data_file: set the path for the input data file.

model_file: set the path for the model file.

result_file: set the path for the result file. If result_file is not specified, it defaults to data_file + '.result'.
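
For example, this hypothetical test invocation (using only the options documented above) omits result_file, so predictions are written to "toy.data.result", and -c, -n, -s, -r are copied from the model file:

MCILBoost.exe -t 1 toy.data toy.model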

Matlab helper functions

  • MCILBoost.m: main entry point for model training/testing and cross-validation (see the usage sketch after this list).
  • SetParams.m: set parameters for MCILBoost.m. Modify this file to run your own experiments.
  • TrainModel.m: train a model by calling the MCIL-Boost command line.
  • TestModel.m: test a model by calling the MCIL-Boost command line.
  • CrossValidate.m: split the data into n folds, perform n-fold cross-validation, and report performance.
  • ReadData.m: read Matlab data from a text file.
  • WriteData.m: write Matlab data to a text file.
  • ReadResult.m: read Matlab result data from a text file.
  • MeasureResult.m: evaluate performance in terms of accuracy and AUC (area under the ROC curve).
  • AUC.m: compute the area under the ROC curve given predictions and ground-truth labels.
  • demoToy.m: demo script for the toy data.
  • SetParamsToy.m: set parameters for demoToy.m.
  • demo1.m: demo script for the Fox, Tiger, and Elephant experiments.
  • SetParamsDemo1.m: set parameters for demo1.m.
  • demo2.m: demo script for the SIVAL experiment.
  • SetParamsDemo2.m: set parameters for demo2.m.
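
A minimal usage sketch of the helpers (hypothetical code: the exact signatures and parameter fields live in the files above, so consult "demoToy.m" and "SetParamsToy.m" for the real ones):

    % Hypothetical workflow sketch; see demoToy.m for the authoritative version.
    params = SetParamsToy();                % configure data paths, c, n, s, r, ...
    TrainModel(params);                     % calls the MCIL-Boost command line with -t 0
    TestModel(params);                      % calls the MCIL-Boost command line with -t 1
    result = ReadResult('toy.result');      % load predictions back into Matlab
    [acc, auc] = MeasureResult(result);     % accuracy and area under the ROC curve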

Summary of Benchmark Results

  • I provide two scripts for running experiments on publicly available MIL benchmarks.
    • "demo1.m": experiments on the Fox, Tiger, and Elephant datasets.
      MIL-Boost achieves 0.61 (Fox), 0.81 (Tiger), and 0.82 (Elephant) with 10-fold cross-validation averaged over 10 runs.
    • "demo2.m": experiments on the SIVAL dataset, which contains 180 positive bags (3 clusters) and 180 negative bags. Because multiple clusters appear in the positive bags, MCIL-Boost outperforms MIL-Boost:
      MIL-Boost (c=1): mean_acc = 0.742, mean_auc = 0.824
      MCIL-Boost (c=3): mean_acc = 0.879, mean_auc = 0.944
  • Note: See "demo1.m" and "demo2.m" for details.

Input Format

  • Note: You can use the Matlab functions "ReadData.m" and "WriteData.m" to read/write Matlab data from/to text files in this format.
  • Description: the input format is similar to the format used in the LIBSVM and MILL packages; a sparse format is also supported. The first line specifies the total number of instances and the number of feature dimensions. Each following line represents one instance, beginning with an <inst_id>:<bag_id>:<label> triple (label >= 1 for positive bags, 0 for negative bags), followed by the features. Each feature is written as an <index>:<value> pair, where <index> is the index of the feature (starting from 1).
  • Format:
    <#instances> <#dimensions>
    <inst_id>:<bag_id>:<label> <index>:<value> <index>:<value> ...
    <inst_id>:<bag_id>:<label> <index>:<value> <index>:<value> ...
  • Example: a toy example (see "toy.data") with two negative bags and two positive bags. The negative instances are always (0, 0, 0), while the positive instances come from two clusters, (0, 1, 0) and (0, 0, 1):
    8 3
    0:0:0 1:0 2:0 3:0
    1:0:0 1:0 2:0 3:0
    2:1:0 1:0 2:0 3:0
    3:1:0 1:0 2:0 3:0
    4:2:1 1:0 2:1 3:0
    5:2:1 1:0 2:0 3:0
    6:3:1 1:0 2:0 3:1
    7:3:1 1:0 2:0 3:0
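
As an illustration, the following Matlab snippet writes exactly this toy example in the format above (a self-contained sketch, independent of the package's "WriteData.m"):

    % Sketch: write bags/instances in the MCILBoost input format.
    insts  = [0 0 0; 0 0 0; 0 0 0; 0 0 0; 0 1 0; 0 0 0; 0 0 1; 0 0 0];
    bagIds = [0 0 1 1 2 2 3 3];    % bag id of each instance
    labels = [0 0 0 0 1 1 1 1];    % bag label, replicated per instance
    fid = fopen('toy.data', 'w');
    fprintf(fid, '%d %d\n', size(insts, 1), size(insts, 2));
    for i = 1:size(insts, 1)
        fprintf(fid, '%d:%d:%d', i - 1, bagIds(i), labels(i));    % inst:bag:label
        fprintf(fid, ' %d:%g', [1:size(insts, 2); insts(i, :)]);  % 1-based feature indices
        fprintf(fid, '\n');
    end
    fclose(fid);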

Output Format

  • Note: You can use the Matlab function "ReadResult.m" to load the result file into Matlab.

  • Description: The software outputs four kinds of predictions (see more details in the paper):

    • overall bag-level prediction p_i (the probability of the bag x_i being positive bag)
    • cluster-wise bag-level prediction p_i^k (the probability of the bag x_i belonging to k-th cluster)
    • overall instance-level prediction p_{ij} (the probability of the instance x_{ij} being positive instance)
    • cluster-wise instance-level prediction p_{ij}^k (the probability of the instance x_{ij} belonging to the k-th cluster)
    • In the first line, the software outputs the number of bags and the number of clusters. Then, for each bag, it outputs one line with the bag-level information and prediction (bag id, number of instances, ground-truth label, number of clusters, and p_i), followed by one line with the bag-level prediction for each cluster (cluster id and p_i^k). Finally, for each instance in the bag, it outputs one line with the overall instance-level prediction (instance id and p_{ij}) and the instance-level prediction for each cluster (cluster id and p_{ij}^k).
  • Format:
    #bags=<#bags> #clusters=<#clusters>
    bag_id=<id> #insts=<n> label=<label> #clusters=<K> pred=<p_i>
    cluster_id=<k> pred=<p_i^k> cluster_id=<k> pred=<p_i^k> ...
    inst_id=<j> pred=<p_ij> cluster_id=<k> pred=<p_ij^k> cluster_id=<k> pred=<p_ij^k> ...
    ...

  • Example: The output of the toy example:
    #bags=4 #clusters=2
    bag_id=0 #insts=2 label=0 #clusters=2 pred=0
    cluster_id=0 pred=0 cluster_id=1 pred=0
    inst_id=0 pred=0 cluster_id=0 pred=0 cluster_id=1 pred=0
    inst_id=1 pred=0 cluster_id=0 pred=0 cluster_id=1 pred=0
    bag_id=1 #insts=2 label=0 #clusters=2 pred=0
    cluster_id=0 pred=0 cluster_id=1 pred=0
    inst_id=0 pred=0 cluster_id=0 pred=0 cluster_id=1 pred=0
    inst_id=1 pred=0 cluster_id=0 pred=0 cluster_id=1 pred=0
    bag_id=2 #insts=2 label=1 #clusters=2 pred=1
    cluster_id=0 pred=1 cluster_id=1 pred=0
    inst_id=0 pred=1 cluster_id=0 pred=1 cluster_id=1 pred=0
    inst_id=1 pred=0 cluster_id=0 pred=0 cluster_id=1 pred=0
    bag_id=3 #insts=2 label=1 #clusters=2 pred=1
    cluster_id=0 pred=0 cluster_id=1 pred=1
    inst_id=0 pred=1 cluster_id=0 pred=0 cluster_id=1 pred=1
    inst_id=1 pred=0 cluster_id=0 pred=0 cluster_id=1 pred=0
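
A sketch for reading the overall bag-level predictions back into Matlab, assuming the line layout of the toy example above (self-contained and independent of the package's "ReadResult.m"):

    % Sketch: parse overall bag-level predictions p_i from a result file.
    fid = fopen('toy.result', 'r');
    counts = sscanf(fgetl(fid), '#bags=%d #clusters=%d');  % header line
    nBags = counts(1);
    bagPred = zeros(nBags, 1);
    for b = 1:nBags
        vals = sscanf(fgetl(fid), 'bag_id=%d #insts=%d label=%d #clusters=%d pred=%f');
        bagPred(b) = vals(5);          % overall bag-level prediction p_i
        fgetl(fid);                    % skip the cluster-wise bag-level line
        for j = 1:vals(2)              % skip the per-instance lines
            fgetl(fid);
        end
    end
    fclose(fid);
    disp(bagPred');                    % [0 0 1 1] for the toy example above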

Credit

Part of this code is based on work by Piotr Dollar and Boris Babenko.
