Collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning.

News

Read our launch blogpost


Installation

Pip / conda

pip install lightning-flash -U

Pip from source

# with git
pip install git+https://github.com/PyTorchLightning/lightning-flash.git@master
# OR from an archive
pip install https://github.com/PyTorchLightning/lightning-flash/archive/master.zip

From source using setuptools

# clone flash repository locally
git clone https://github.com/PyTorchLightning/lightning-flash.git
cd lightning-flash
# install in editable mode
pip install -e .

What is Flash

Flash is a framework of tasks for fast prototyping, baselining, finetuning and solving business and scientific problems with deep learning. It is focused on:

  • Predictions
  • Finetuning
  • Task-based training

It is built for data scientists, machine learning practitioners, and applied researchers.

Scalability

Flash is built on top of PyTorch Lightning (by the Lightning team), which is a thin organizational layer on top of PyTorch. If you know PyTorch, you know PyTorch Lightning and Flash already!

As a result, Flash can scale up across any hardware (GPUs, TPUs) with zero changes to your code. It also has the best practices in AI research embedded into each task, so you don't have to be a deep learning PhD to leverage its power :)
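
For example, moving between CPU, GPUs, and TPUs is just a Trainer flag away (a minimal sketch; the gpus and tpu_cores arguments follow the PyTorch Lightning 1.x Trainer API):

import flash

trainer = flash.Trainer(max_epochs=1)               # CPU
trainer = flash.Trainer(max_epochs=1, gpus=2)       # 2 GPUs
trainer = flash.Trainer(max_epochs=1, tpu_cores=8)  # 8 TPU cores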

Predictions

# import our libraries
from flash.text import TextClassifier

# 1. Load finetuned task
model = TextClassifier.load_from_checkpoint("https://flash-weights.s3.amazonaws.com/text_classification_model.pt")

# 2. Perform inference from list of sequences
predictions = model.predict([
    "Turgid dialogue, feeble characterization - Harvey Keitel a judge?.",
    "The worst movie in the history of cinema.",
    "I come from Bulgaria where it 's almost impossible to have a tornado."
    "Very, very afraid"
    "This guy has done a great job with this movie!",
])
print(predictions)

Finetuning

First, finetune:

import flash
from flash import download_data
from flash.vision import ImageClassificationData, ImageClassifier

# 1. Download the data
download_data("https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip", 'data/')

# 2. Load the data
datamodule = ImageClassificationData.from_folders(
    train_folder="data/hymenoptera_data/train/",
    valid_folder="data/hymenoptera_data/val/",
    test_folder="data/hymenoptera_data/test/",
)

# 3. Build the model
model = ImageClassifier(num_classes=datamodule.num_classes, backbone="resnet18")

# 4. Create the trainer. Run once on data
trainer = flash.Trainer(max_epochs=1)

# 5. Finetune the model
trainer.finetune(model, datamodule=datamodule, strategy="freeze")

# 6. Save it!
trainer.save_checkpoint("image_classification_model.pt")

Then use the finetuned model:

# load the finetuned model
classifier = ImageClassifier.load_from_checkpoint('image_classification_model.pt')

# predict!
predictions = classifier.predict('data/hymenoptera_data/val/bees/65038344_52a45d090d.jpg')
print(predictions)

Tasks

Flash is built as a collection of community-built tasks. A task is highly opinionated and laser-focused on solving a single problem well, using state-of-the-art methods.

Example 1: Image classification

Flash has an ImageClassification task to tackle any image classification problem.

To illustrate, let's say we wanted to develop a model that can classify ants vs. bees.

import flash
from flash import download_data
from flash.vision import ImageClassificationData, ImageClassifier

# 1. Download the data
download_data("https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip", 'data/')

# 2. Load the data
datamodule = ImageClassificationData.from_folders(
    train_folder="data/hymenoptera_data/train/",
    valid_folder="data/hymenoptera_data/val/",
    test_folder="data/hymenoptera_data/test/",
)

# 3. Build the model
model = ImageClassifier(num_classes=datamodule.num_classes)

# 4. Create the trainer. Run once on data
trainer = flash.Trainer(max_epochs=1)

# 5. Train the model
trainer.finetune(model, datamodule=datamodule, strategy="freeze_unfreeze")

# 6. Test the model
trainer.test()

# 7. Predict!
predictions = model.predict([
    "data/hymenoptera_data/val/bees/65038344_52a45d090d.jpg",
    "data/hymenoptera_data/val/bees/590318879_68cf112861.jpg",
    "data/hymenoptera_data/val/ants/540543309_ddbb193ee5.jpg",
])
print(predictions)

To run the example:

python flash_examples/finetuning/image_classifier.py

Example 2: Text Classification

Flash has a TextClassification task to tackle any text classification problem.

To illustrate, say you wanted to classify movie reviews as positive or negative.

import flash
from flash import download_data
from flash.text import TextClassificationData, TextClassifier

# 1. Download the data
download_data("https://pl-flash-data.s3.amazonaws.com/imdb.zip", 'data/')

# 2. Load the data
datamodule = TextClassificationData.from_files(
    train_file="data/imdb/train.csv",
    valid_file="data/imdb/valid.csv",
    test_file="data/imdb/test.csv",
    input="review",
    target="sentiment",
    batch_size=512
)

# 3. Build the model
model = TextClassifier(num_classes=datamodule.num_classes)

# 4. Create the trainer. Run once on data
trainer = flash.Trainer(max_epochs=1)

# 5. Fine-tune the model
trainer.finetune(model, datamodule=datamodule, strategy="freeze_unfreeze")

# 6. Test model
trainer.test()

# 7. Classify a few sentences! How was the movie?
predictions = model.predict([
    "Turgid dialogue, feeble characterization - Harvey Keitel a judge?.",
    "The worst movie in the history of cinema.",
    "I come from Bulgaria where it 's almost impossible to have a tornado."
    "Very, very afraid"
    "This guy has done a great job with this movie!",
])
print(predictions)

To run the example:

python flash_examples/finetuning/classify_text.py

Example 3: Tabular Classification

Flash has a TabularClassification task to tackle any tabular classification problem.

To illustrate, say we want to build a model to predict if a passenger survived on the Titanic.

from pytorch_lightning.metrics.classification import Accuracy, Precision, Recall
import flash
from flash import download_data
from flash.tabular import TabularClassifier, TabularData

# 1. Download the data
download_data("https://pl-flash-data.s3.amazonaws.com/titanic.zip", 'data/')

# 2. Load the data
datamodule = TabularData.from_csv(
    "./data/titanic/titanic.csv",
    test_csv="./data/titanic/test.csv",
    categorical_input=["Sex", "Age", "SibSp", "Parch", "Ticket", "Cabin", "Embarked"],
    numerical_input=["Fare"],
    target="Survived",
    val_size=0.25,
)

# 3. Build the model
model = TabularClassifier.from_data(datamodule, metrics=[Accuracy(), Precision(), Recall()])

# 4. Create the trainer. Run 10 times on data
trainer = flash.Trainer(max_epochs=10)

# 5. Train the model
trainer.fit(model, datamodule=datamodule)

# 6. Test model
trainer.test()

# 7. Predict!
predictions = model.predict("data/titanic/titanic.csv")
print(predictions)

To run the example:

python flash_examples/finetuning/tabular_data.py

A general task

Flash comes prebuilt with a task to handle a huge portion of deep learning problems.

import flash
from torch import nn, optim
from torch.utils.data import DataLoader, random_split
from torchvision import transforms, datasets

# model
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10)
)

# data
dataset = datasets.MNIST('./data_folder', download=True, transform=transforms.ToTensor())
train, val = random_split(dataset, [55000, 5000])

# task
classifier = flash.Task(model, loss_fn=nn.functional.cross_entropy, optimizer=optim.Adam)

# train
flash.Trainer().fit(classifier, DataLoader(train), DataLoader(val))

Infinitely customizable

Tasks can be built in just a few minutes because Flash is built on top of PyTorch Lightning LightningModules, which are infinitely extensible and let you train across GPUs, TPUs, etc. without any code changes.

from typing import Callable, Mapping, Sequence, Type, Union

import torch
import torch.nn.functional as F
from pytorch_lightning.metrics.classification import Accuracy

from flash.core.classification import ClassificationTask

class LinearClassifier(ClassificationTask):
    def __init__(
        self,
        num_inputs,
        num_classes,
        loss_fn: Callable = F.cross_entropy,
        optimizer: Type[torch.optim.Optimizer] = torch.optim.SGD,
        metrics: Union[Callable, Mapping, Sequence, None] = [Accuracy()],
        learning_rate: float = 1e-3,
    ):
        super().__init__(
            model=None,
            loss_fn=loss_fn,
            optimizer=optimizer,
            metrics=metrics,
            learning_rate=learning_rate,
        )
        self.save_hyperparameters()

        self.linear = torch.nn.Linear(num_inputs, num_classes)

    def forward(self, x):
        return self.linear(x)

classifier = LinearClassifier(num_inputs=28 * 28, num_classes=10)
...

When you reach the limits of the flexibility provided by tasks, seamlessly transition to PyTorch Lightning, which gives you the most flexibility because it is simply organized PyTorch.
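
As a rough sketch of what that transition looks like (a standard PyTorch Lightning 1.x LightningModule; this example is not tied to any Flash task):

import torch
from torch import nn
import pytorch_lightning as pl

class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

    def training_step(self, batch, batch_idx):
        # plain PyTorch, organized into Lightning hooks
        x, y = batch
        loss = nn.functional.cross_entropy(self.model(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)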

Contribute!

The Lightning + Flash team is hard at work building more tasks for common deep-learning use cases. But we're looking for incredible contributors like you to submit new tasks!

Join our Slack to get help becoming a contributor!

Community

For help or questions, join our huge community on Slack!

Citations

We’re excited to continue the strong legacy of open-source software and have been inspired over the years by Caffe, Theano, Keras, PyTorch, torchbearer, and fast.ai. When/if a paper is written about this, we’ll be happy to cite these frameworks and the corresponding authors.

License

Please observe the Apache 2.0 license that is listed in this repository. In addition, the Lightning framework is Patent Pending.

Comments
  • Adding support for loading datasets and visualizing model predictions via FiftyOne

    What does this PR do?

    Integrates Lightning Flash with FiftyOne, the open source dataset and model analysis library!

    Loading FiftyOne data into Flash

    This PR adds FiftyOneDataSources for image/video classification, object detection, semantic segmentation, and image embedding tasks that load FiftyOne Datasets into Flash.

    Loading Flash predictions into FiftyOne

    This PR adds Serializer implementations that can convert classification/detection/segmentation model outputs into the appropriate FiftyOne label types so that they can be added to FiftyOne datasets and visualized.

    Note

    This PR requires a source install of FiftyOne on this branch https://github.com/voxel51/fiftyone/pull/1059 in order to function.

    git clone https://github.com/voxel51/fiftyone
    cd fiftyone
    git checkout --track origin/flash-video
    bash install.bash
    

    The above branch also contains a parallel integration that enables FiftyOne users to add predictions from any Flash model to their datasets 😄

    Points of discussion

    1. It'd be great if these examples could be integrated into the Flash documentation/README in the appropriate places 😄

    2. The new FiftyoneDataSource classes introduced in this PR require a label_field argument to specify which field of the FiftyOne dataset should be used as the label field. To enable this, we added **data_source_kwargs to Flash's processor interface. Perhaps there's a better way to support this?

    3. When serializing object detections, Flash models seem to return bounding boxes in absolute coordinates, but FiftyOne expects bounding boxes in relative coordinates. Is it possible for FiftyOneDetectionLabels to access the dimensions of the current image when serialize() is called? Perhaps using set_state() as is done for class labels? The current implementation requires fiftyone.utils.flash.normalize_detections() to be manually called to convert to relative coordinates for import into FiftyOne (see the sketch below), but it would be much cleaner if this could be done natively within FiftyOneDetectionLabels...
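
    For reference, a hypothetical sketch of the absolute -> relative conversion under discussion (illustrative only, not the actual normalize_detections() implementation; the box formats are assumptions: [x1, y1, x2, y2] pixel boxes in, FiftyOne-style relative [x, y, width, height] boxes out):

    def to_relative_box(box, img_width, img_height):
        # box: [x1, y1, x2, y2] in absolute pixel coordinates
        x1, y1, x2, y2 = box
        # FiftyOne boxes are [top-left-x, top-left-y, width, height],
        # each expressed relative to the image dimensions (values in [0, 1])
        return [
            x1 / img_width,
            y1 / img_height,
            (x2 - x1) / img_width,
            (y2 - y1) / img_height,
        ]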

    Basic patterns

    The following subsections show the basic patterns enabled by this integration. See the next section for concrete examples of each task type.

    Loading data from FiftyOne into Flash

    FiftyOne users can load their datasets into Flash Data Sources via the pattern below:

    from flash.image import ImageClassificationData
    
    import fiftyone as fo
    
    train_dataset = fo.Dataset.from_dir(
        "/path/to/train",
        fo.types.ImageClassificationDirectoryTree,
        label_field="ground_truth",
    )
    
    val_dataset = fo.Dataset.from_dir(
        "/path/to/val",
        fo.types.ImageClassificationDirectoryTree,
        label_field="ground_truth",
    )
    
    datamodule = ImageClassificationData.from_fiftyone(
        train_dataset=train_dataset,
        val_dataset=val_dataset,
        label_field="ground_truth",
    )
    

    Visualizing Flash predictions in FiftyOne

    Flash users can swap out the serializer on their model with the corresponding FiftyOne serializer for the task type, and then visualize their predictions in the FiftyOne App via the pattern below:

    from flash import Trainer
    from flash.core.classification import FiftyOneLabels
    from flash.core.integrations.fiftyone import visualize
    from flash.video import VideoClassificationData, VideoClassifier
    
    classifier = VideoClassifier.load_from_checkpoint(...)
    
    # Option 1: Generate predictions using a Trainer and datamodule
    datamodule = VideoClassificationData.from_folders(
        predict_folder="/path/to/folder",
        ...
    )
    trainer = Trainer()
    classifier.serializer = FiftyOneLabels(return_filepath=True)
    predictions = trainer.predict(classifier, datamodule=datamodule)
    
    session = visualize(predictions) # Launch FiftyOne
    
    # Option 2: Generate predictions from model using filepaths
    filepaths = ["list", "of", "filepaths"]
    classifier.serializer = FiftyOneLabels()
    predictions = classifier.predict(filepaths)
    
    session = visualize(predictions, filepaths=filepaths) # Launch FiftyOne
    

    Applying Flash models to FiftyOne datasets

    In addition to this PR, https://github.com/voxel51/fiftyone/pull/1059 adds a parallel integration in the FiftyOne library that enables FiftyOne users to add predictions from any Flash model to their datasets via the pattern below:

    from flash.image import ObjectDetector
    
    import fiftyone as fo
    import fiftyone.zoo as foz
    
    dataset = foz.load_zoo_dataset("quickstart", max_samples=10)
    
    model = ObjectDetector.load_from_checkpoint("https://flash-weights.s3.amazonaws.com/object_detection_model.pt")
    
    dataset.apply_model(model, label_field="predictions")
    
    session = fo.launch_app(dataset)
    

    Task examples

    The subsections below demonstrate both (a) FiftyOne dataset -> Flash, and (b) Flash predictions -> FiftyOne for each task type.

    Video classification

    from torch.utils.data.sampler import RandomSampler
    
    import flash
    from flash.core.classification import FiftyOneLabels
    from flash.core.data.utils import download_data
    from flash.video import VideoClassificationData, VideoClassifier
    
    import fiftyone as fo
    
    # 1. Download data
    download_data("https://pl-flash-data.s3.amazonaws.com/kinetics.zip")
    
    # 2. Load data into FiftyOne
    # Here we use different datasets for each split, but you can also
    # use views into the same dataset
    train_dataset = fo.Dataset.from_dir(
        "data/kinetics/train",
        fo.types.VideoClassificationDirectoryTree,
        label_field="ground_truth",
        max_samples=5,
    )
    
    val_dataset = fo.Dataset.from_dir(
        "data/kinetics/val",
        fo.types.VideoClassificationDirectoryTree,
        label_field="ground_truth",
        max_samples=5,
    )
    
    predict_dataset = fo.Dataset.from_dir(
        "data/kinetics/predict",
        fo.types.VideoDirectory,
        max_samples=5,
    )
    
    # 3. Finetune a model
    classifier = VideoClassifier.load_from_checkpoint(
      "https://flash-weights.s3.amazonaws.com/video_classification.pt",
      pretrained=False,
    )
    
    datamodule = VideoClassificationData.from_fiftyone(
        train_dataset=train_dataset,
        val_dataset=val_dataset,
        predict_dataset=predict_dataset,
        label_field="ground_truth",
        batch_size=8,
        clip_sampler="uniform",
        clip_duration=1,
        video_sampler=RandomSampler,
        decode_audio=False,
        num_workers=8,
    )
    
    trainer = flash.Trainer(max_epochs=1, fast_dev_run=1)
    trainer.finetune(classifier, datamodule=datamodule)
    trainer.save_checkpoint("video_classification.pt")
    
    # 4. Predict from checkpoint
    classifier = VideoClassifier.load_from_checkpoint(
      "https://flash-weights.s3.amazonaws.com/video_classification.pt",
      pretrained=False,
    )
    
    classifier.serializer = FiftyOneLabels()
    
    filepaths = predict_dataset.values("filepath")
    predictions = classifier.predict(filepaths)
    
    predict_dataset.set_values("predictions", predictions)
    
    # 5. Visualize in FiftyOne App
    session = fo.launch_app(predict_dataset)
    

    Image classification

    from itertools import chain
    
    import fiftyone as fo
    import fiftyone.zoo as foz
    
    from flash import Trainer
    from flash.core.classification import FiftyOneLabels
    from flash.core.finetuning import FreezeUnfreeze
    from flash.image import ImageClassificationData, ImageClassifier
    
    # 1. Load your FiftyOne dataset
    # Here we use views into one dataset, but you can also create a
    # different dataset for each split
    dataset = foz.load_zoo_dataset("cifar10", split="test", max_samples=40)
    train_dataset = dataset.shuffle(seed=51)[:20]
    test_dataset = dataset.shuffle(seed=51)[20:25]
    val_dataset = dataset.shuffle(seed=51)[25:30]
    predict_dataset = dataset.shuffle(seed=51)[30:40]
    
    # 2. Load the Datamodule
    datamodule = ImageClassificationData.from_fiftyone(
        train_dataset = train_dataset,
        test_dataset = test_dataset,
        val_dataset = val_dataset,
        predict_dataset = predict_dataset,
        label_field = "ground_truth",
        batch_size=4,
        num_workers=4,
    )
    
    # 3. Build the model
    model = ImageClassifier(
        backbone="resnet18",
        num_classes=datamodule.num_classes,
        serializer=FiftyOneLabels(),
    )
    
    # 4. Create the trainer
    trainer = Trainer(
        max_epochs=1,
        limit_train_batches=1,
        limit_val_batches=1,
    )
    
    # 5. Finetune the model
    trainer.finetune(
        model,
        datamodule=datamodule,
        strategy=FreezeUnfreeze(unfreeze_epoch=1),
    )
    
    # 6. Save it!
    trainer.save_checkpoint("image_classification_model.pt")
    
    # 7. Generate predictions
    model = ImageClassifier.load_from_checkpoint(
      "https://flash-weights.s3.amazonaws.com/image_classification_model.pt"
    )
    model.serializer = FiftyOneLabels()
    
    predictions = trainer.predict(model, datamodule=datamodule)
    
    predictions = list(chain.from_iterable(predictions)) # flatten batches
    
    # 8. Add predictions to dataset and analyze
    predict_dataset.set_values("flash_predictions", predictions)
    session = fo.launch_app(view=predict_dataset)
    

    Object detection

    from itertools import chain
    
    import fiftyone as fo
    import fiftyone.zoo as foz
    
    from flash import Trainer
    from flash.image import ObjectDetectionData, ObjectDetector
    from flash.image.detection.serialization import FiftyOneDetectionLabels
    
    # 1. Load your FiftyOne dataset
    # Here we use views into one dataset, but you can also create a
    # different dataset for each split
    dataset = foz.load_zoo_dataset("quickstart", max_samples=40)
    train_dataset = dataset.shuffle(seed=51)[:20]
    test_dataset = dataset.shuffle(seed=51)[20:25]
    val_dataset = dataset.shuffle(seed=51)[25:30]
    predict_dataset = dataset.shuffle(seed=51)[30:40]
    
    # 2. Load the Datamodule
    datamodule = ObjectDetectionData.from_fiftyone(
        train_dataset = train_dataset,
        test_dataset = test_dataset,
        val_dataset = val_dataset,
        predict_dataset = predict_dataset,
        label_field = "ground_truth",
        batch_size=4,
        num_workers=4,
    )
    
    # 3. Build the model
    model = ObjectDetector(
        model="retinanet",
        num_classes=datamodule.num_classes,
        serializer=FiftyOneDetectionLabels(),
    )
    
    # 4. Create the trainer
    trainer = Trainer(
        max_epochs=1,
        limit_train_batches=1,
        limit_val_batches=1,
    )
    
    # 5. Finetune the model
    trainer.finetune(model, datamodule=datamodule)
    
    # 6. Save it!
    trainer.save_checkpoint("object_detection_model.pt")
    
    # 7. Generate predictions
    model = ObjectDetector.load_from_checkpoint(
      "https://flash-weights.s3.amazonaws.com/object_detection_model.pt"
    )
    model.serializer = FiftyOneDetectionLabels()
    
    predictions = trainer.predict(model, datamodule=datamodule)
    
    predictions = list(chain.from_iterable(predictions)) # flatten batches
    
    # 8. Add predictions to dataset and analyze
    predict_dataset.set_values("flash_predictions", predictions)
    session = fo.launch_app(view=predict_dataset)
    

    Semantic segmentation

    from itertools import chain
    
    import fiftyone as fo
    import fiftyone.zoo as foz
    
    from flash import Trainer
    from flash.core.data.utils import download_data
    from flash.image import SemanticSegmentation, SemanticSegmentationData
    from flash.image.segmentation.serialization import FiftyOneSegmentationLabels
    
    # 1. Load your FiftyOne dataset
    # This is a dataset with semantic segmentation labels generated via the
    # CARLA self-driving simulator. The data was generated as part of the
    # Lyft Udacity Challenge. More info here:
    # https://www.kaggle.com/kumaresanmanickavelu/lyft-udacity-challenge
    download_data(
      "https://github.com/ongchinkiat/LyftPerceptionChallenge/releases/download/v0.1/carla-capture-20180513A.zip",
      "data/"
    )
    
    # Here we use views into one dataset, but you can also create a
    # different dataset for each split
    dataset = fo.Dataset.from_dir(
        dataset_dir = "data",
        data_path = "CameraRGB",
        labels_path = "CameraSeg",
        max_samples = 40,
        force_grayscale = True,
        dataset_type=fo.types.ImageSegmentationDirectory,
    )
    train_dataset = dataset.shuffle(seed=51)[:20]
    test_dataset = dataset.shuffle(seed=51)[20:25]
    val_dataset = dataset.shuffle(seed=51)[25:30]
    predict_dataset = dataset.shuffle(seed=51)[30:40]
    
    # 2. Load the Datamodule
    datamodule = SemanticSegmentationData.from_fiftyone(
        train_dataset = train_dataset,
        test_dataset = test_dataset,
        val_dataset = val_dataset,
        predict_dataset = predict_dataset,
        label_field = "ground_truth",
        batch_size=4,
        num_workers=4,
        num_classes=21,
    )
    
    # 3. Build the model
    model = SemanticSegmentation(
        backbone="resnet50",
        num_classes=datamodule.num_classes,
        serializer=FiftyOneSegmentationLabels(),
    )
    
    # 4. Create the trainer
    trainer = Trainer(
        max_epochs=1,
        fast_dev_run=1,
    )
    
    # 5. Finetune the model
    trainer.finetune(model, datamodule=datamodule, strategy="freeze")
    
    # 6. Save it!
    trainer.save_checkpoint("semantic_segmentation_model.pt")
    
    # 7. Generate predictions
    model = SemanticSegmentation.load_from_checkpoint(
      "https://flash-weights.s3.amazonaws.com/semantic_segmentation_model.pt"
    )
    model.serializer = FiftyOneSegmentationLabels()
    
    predictions = trainer.predict(model, datamodule=datamodule)
    
    predictions = list(chain.from_iterable(predictions)) # flatten batches
    
    # 8. Add predictions to dataset and analyze
    predict_dataset.set_values("flash_predictions", predictions)
    session = fo.launch_app(view=predict_dataset)
    

    Image embeddings

    import numpy as np
    import torch
    
    from flash.core.data.utils import download_data
    from flash.image import ImageEmbedder
    
    import fiftyone as fo
    import fiftyone.brain as fob
    
    # 1 Download data
    download_data(
        "https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip"
    )
    
    # 2 Load data into FiftyOne
    dataset = fo.Dataset.from_dir(
        "data/hymenoptera_data/test/",
        fo.types.ImageClassificationDirectoryTree,
    )
    
    # 3 Load model
    embedder = ImageEmbedder(backbone="swav-imagenet", embedding_dim=128)
    
    # 4 Generate embeddings
    filepaths = dataset.values("filepath")
    embeddings = np.stack(embedder.predict(filepaths))
    
    # 5 Visualize in FiftyOne App
    results = fob.compute_visualization(dataset, embeddings=embeddings)
    
    session = fo.launch_app(dataset)
    
    plot = results.visualize(labels="ground_truth.label")
    plot.show()
    

    Before submitting

    • [X] (This PR was discussed face-to-face) Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [X] Did you read the contributor guideline, Pull Request section?
    • [X] Did you make sure your PR does only one thing, instead of bundling different changes together?
    • [ ] Did you make sure to update the documentation with your changes?
    • [X] Did you write any new necessary tests? [not needed for typos/docs]
    • [X] Did you verify new and existing tests pass locally with your changes?
    • [X] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [X] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    opened by ehofesmann 23
  • QuestionAnsweringInputBase is returning incorrect number of samples in batch

    🐛 Bug

    In the case of a long QA context, the Hugging Face tokenizer divides the tokenized output into chunks, which is expected and correct. But the load_sample function in QuestionAnsweringInputBase returns a collated sample, which results in arbitrarily sized batches that ignore the specified batch_size. This may result in CUDA OOM and other problems.

    One sample per chunk is created instead of one sample per SQuAD sample. It looks like the code tries to "utilize" all chunks even if they do not contain the answer. That might be useful, but in that case IterableInput should be used. By default, only one sample per SQuAD sample should be returned and impossible answers ignored (unless they are SQuAD v2 impossible answers).
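
    For illustration, a standalone sketch of the chunking behavior using the Hugging Face tokenizers API (this is not Flash's load_sample code; the model name and lengths are arbitrary):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    encoded = tokenizer(
        "What is the answer?",        # question
        "very long context " * 2000,  # a long context forces chunking
        max_length=384,
        truncation="only_second",
        return_overflowing_tokens=True,
        stride=128,
    )
    # one feature per chunk, so a single QA sample yields several samples
    print(len(encoded["input_ids"]))  # > 1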

    To Reproduce

    Steps to reproduce the behavior:

    Code sample

    datamodule = QuestionAnsweringData.from_squad_v2(
        train_file="/data/share/cuad/CUAD_v1/CUAD_v1.json",
        batch_size=2, max_source_length=4096, max_target_length=512 #2 samples per batch specified
    )
    
    model = QuestionAnsweringTask(backbone="google/bigbird-base-trivia-itc", 
                    max_answer_length=512)
    
    trainer = Trainer(max_epochs=3, gpus=1)
    trainer.fit(model, datamodule=datamodule) #this crashes because arbitrary sized batches are returned
    

    Here, 2 samples per batch are requested. If a sample's context size is > 4096 tokens, then multiple chunks are returned.

    E.g., if the first context size is 5000 and the second context size is 3000, then 3 samples will be yielded from QuestionAnsweringInputBase (the 5000-token context becomes 2 chunks and the 3000-token context becomes 1 chunk).

    Expected behavior

    Correct number of samples in batch is returned.

    Environment

    • PyTorch Version (e.g., 1.0): 1.10.2
    • OS (e.g., Linux): Linux
    • How you installed PyTorch (conda, pip, source): pip
    • Build command you used (if compiling from source):
    • Python version: 3.9
    • CUDA/cuDNN version: 11.2
    • GPU models and configuration: Tesla V100
    • Any other relevant information:

    Additional context

    Possible solutions: QuestionAnsweringInputBase could be based on IterableInput, since the number of samples is not known in advance, or a completely new iterable version could be implemented separately (a generic sketch follows below).

    Alternatively, the "classic" Input could remain, but then one sample per SQuAD sample must be returned.

    bug / fix help wanted won't fix 
    opened by mfojtak 20
  • TypeError: list indices must be integers or slices, not DefaultDataKeys when training Object Detection Model

    🐛 Bug

    I've spent days making data augmentation work for object detection, but errors keep popping up. I don't know if I'm reinventing the wheel or if a lot is missing in terms of data preparation/augmentation documentation for object detection. I'm about to give up...

    Following #409 (still not resolved), I've created a custom data augmentation transform using albumentations. However, it fails with a weird message when training starts (once we fix the error, I can make a PR for integrating albumentations with PyTorch Lightning Flash):

    File "train.py", line 93, in train
        trainer.finetune(model, datamodule=datamodule)
      File "/home/ubuntu/.local/lib/python3.8/site-packages/flash/core/trainer.py", line 148, in finetune
        return super().fit(model, train_dataloader, val_dataloaders, datamodule)
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
        self._run(model)
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 756, in _run
        self.dispatch()
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 797, in dispatch
        self.accelerator.start_training(self)
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
        self.training_type_plugin.start_training(trainer)
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
        self._results = trainer.run_stage()
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 807, in run_stage
        return self.run_train()
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 842, in run_train
        self.run_sanity_check(self.lightning_module)
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1107, in run_sanity_check
        self.run_evaluation()
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 962, in run_evaluation
        output = self.evaluation_loop.evaluation_step(batch, batch_idx, dataloader_idx)
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 174, in evaluation_step
        output = self.trainer.accelerator.validation_step(args)
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 226, in validation_step
        return self.training_type_plugin.validation_step(*args)
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 322, in validation_step
        return self.model(*args, **kwargs)
      File "/usr/lib/python3/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/usr/lib/python3/dist-packages/torch/nn/parallel/distributed.py", line 705, in forward
        output = self.module(*inputs[0], **kwargs[0])
      File "/usr/lib/python3/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 57, in forward
        output = self.module.validation_step(*inputs, **kwargs)
      File "/home/ubuntu/.local/lib/python3.8/site-packages/flash/image/detection/model.py", line 179, in validation_step
        images, targets = batch[DefaultDataKeys.INPUT], batch[DefaultDataKeys.TARGET]
    TypeError: list indices must be integers or slices, not DefaultDataKeys
    

    Before that, it was failing with RuntimeError: each element in list of batch should be of equal size, but this torchvision tip of a custom collate (lambda x: x) "fixes" it: https://github.com/pytorch/vision/issues/2624

    What is going on?

    To Reproduce

    
    import albumentations as A
    from albumentations.pytorch.transforms import ToTensorV2
    from PIL import Image
    import cv2
    
    import flash
    from flash.core.data.utils import download_data
    from flash.image import ObjectDetectionData, ObjectDetector
    from pytorch_lightning import seed_everything
    import numpy
    
    import logging
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger(__name__)
    
    seed_everything(42)

    # NOTE: to_cv() and calculate_area() are user-defined helpers (not shown
    # here) that convert a PIL image to a cv2/numpy array and compute a box
    # area, respectively. The transform must be defined before the datamodule
    # lambdas that call it.
    def transform_using_albu(sample, albu_transform):
        labels = sample['target']['labels']
        image = to_cv(sample['input'])
        transformed = albu_transform(image=image, bboxes=sample['target']['boxes'], labels=labels)
        trans_bboxes = [list(boxes) for boxes in transformed["bboxes"]]
        area = [calculate_area(boxes) for boxes in trans_bboxes]
        return {
            'input': transformed["image"],
            'target': {
                'boxes': trans_bboxes,
                'labels': labels,
                'image_id': sample['target']['image_id'],
                'area': area,
                'iscrowd': [0 for _ in trans_bboxes],
            }
        }

    image_size = 1024

    train_transform = A.Compose(
        [
            A.Resize(height=image_size, width=image_size, p=1),
            A.OneOf([
                A.HueSaturationValue(hue_shift_limit=20, sat_shift_limit=20,
                                     val_shift_limit=20, p=0.5),
                A.RandomBrightnessContrast(brightness_limit=0.2,
                                           contrast_limit=0.2, p=0.5),
            ], p=0.9),
            A.ToGray(p=0.01),
            A.VerticalFlip(p=0.5),
            A.HorizontalFlip(p=0.5),
            A.ShiftScaleRotate(p=0.5),
            A.Cutout(num_holes=10, max_h_size=32, max_w_size=32, fill_value=0, p=0.5),
            ToTensorV2(p=1)
        ],
        p=1.0,
        bbox_params=A.BboxParams(
            format='pascal_voc',
            min_area=0,
            min_visibility=0,
            label_fields=['labels']
        )
    )

    valid_transform = A.Compose(
        [
            A.Resize(height=image_size, width=image_size, p=1),
            ToTensorV2(p=1)
        ],
        p=1.0,
        bbox_params=A.BboxParams(
            format='pascal_voc',
            min_area=0,
            min_visibility=0,
            label_fields=['labels']
        )
    )

    test_transform = A.Compose(
        [
            A.Resize(height=image_size, width=image_size, p=1),
            ToTensorV2(p=1)
        ],
        p=1.0,
        bbox_params=A.BboxParams(
            format='pascal_voc',
            min_area=0,
            min_visibility=0,
            label_fields=['labels']
        )
    )

    datamodule = ObjectDetectionData.from_coco(
        train_folder="data_coco/train",
        train_ann_file="data_coco/train/_annotations.coco.json",
        train_transform={
            'pre_tensor_transform': lambda sample: transform_using_albu(sample, train_transform),
            'collate': lambda x: x
        },
        val_transform={
            'pre_tensor_transform': lambda sample: transform_using_albu(sample, valid_transform),
            'collate': lambda x: x
        },
        test_transform={
            'pre_tensor_transform': lambda sample: transform_using_albu(sample, test_transform),
            'collate': lambda x: x
        },
        val_split=0.2,
        batch_size=8,
        num_workers=4,
    )

    model = ObjectDetector(model="retinanet", backbone="resnet101", num_classes=datamodule.num_classes, fpn=True)

    # 4. Create the trainer
    trainer = flash.Trainer(max_epochs=1, gpus=2, accelerator='ddp', limit_train_batches=1, limit_val_batches=1, checkpoint_callback=True)

    # 5. Finetune the model
    trainer.finetune(model, datamodule=datamodule)
    
    

    Environment

    • PyTorch Version: 1.8
    • OS (e.g., Linux): MacOS
    • How you installed PyTorch: pip
    • Python version: 3.7
    • CUDA/cuDNN version: 11
    • GPU models and configuration: 2 A 6000

    Additional context

    It is necessary to provide a clear and working example of augmenting and resizing images for object detection using torchvision transforms or albumentations.
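
    For reference, a minimal standalone albumentations resize-with-bboxes sketch of the kind being requested (a generic example assuming pascal_voc boxes; it is not wired into Flash's transform hooks):

    import albumentations as A
    import numpy as np

    transform = A.Compose(
        [A.Resize(height=512, width=512)],
        bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
    )

    image = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy HWC image
    out = transform(image=image, bboxes=[[10, 20, 100, 200]], labels=[1])
    # out["image"] is resized to 512x512 and out["bboxes"] are rescaled to match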

    bug / fix help wanted 
    opened by hzitoun 18
  • feat: Add Detection Task

    What does this PR do?

    Add support for detection

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
    • [ ] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests? [not needed for typos/docs]
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [ ] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [ ] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    task 
    opened by kaushikb11 17
  • TypeError: list indices must be integers or slices, not DefaultDataKey

    🐛 Bug

    Hi guys, Flash seems to expect data loaders to return dictionaries and not plain tuples, which is what you get in 99% of cases.
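
    For reference, a hypothetical sketch of the dict-style batch Flash indexes into (key names taken from the traceback below; the DefaultDataKeys import path is an assumption for this Flash version):

    import torch
    from flash.core.data.data_source import DefaultDataKeys  # assumed path

    def dict_collate(samples):
        # samples come in as (image, label) tuples from a TensorDataset
        xs, ys = zip(*samples)
        return {
            DefaultDataKeys.INPUT: torch.stack(xs),
            DefaultDataKeys.TARGET: torch.cat(ys),
        }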

    To Reproduce

    Steps to reproduce the behavior: just run the sample code below.

    
    (venv) zuppif@blackbird:~/gust-torchvision$ /home/zuppif/gust-torchvision/venv/bin/python3 /home/zuppif/gust-torchvision/playground.py
    eehh
    GPU available: True, used: True
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2]
    Traceback (most recent call last):
      File "/home/zuppif/gust-torchvision/playground.py", line 60, in <module>
        trainer.finetune(classifier, datamodule=dm, strategy="freeze")
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/flash/core/trainer.py", line 165, in finetune
        return super().fit(model, train_dataloader, val_dataloaders, datamodule)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
        self._run(model)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 912, in _run
        self._pre_dispatch()
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 941, in _pre_dispatch
        self._log_hyperparams()
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 970, in _log_hyperparams
        self.logger.save()
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py", line 48, in wrapped_fn
        return fn(*args, **kwargs)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/loggers/tensorboard.py", line 249, in save
        save_hparams_to_yaml(hparams_file, self.hparams)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 413, in save_hparams_to_yaml
        with fs.open(config_yaml, "w", newline="") as fp:
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/fsspec/spec.py", line 972, in open
        self.open(path, mode, block_size, **kwargs), **text_kwargs
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/fsspec/spec.py", line 976, in open
        f = self._open(
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/fsspec/implementations/local.py", line 145, in _open
        return LocalFileOpener(path, mode, fs=self, **kwargs)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/fsspec/implementations/local.py", line 236, in __init__
        self._open()
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/fsspec/implementations/local.py", line 241, in _open
        self.f = open(self.path, mode=self.mode)
    FileNotFoundError: [Errno 2] No such file or directory: '/home/zuppif/gust-torchvision/lightning_logs/version_2/hparams.yaml'
    (venv) zuppif@blackbird:~/gust-torchvision$ /home/zuppif/gust-torchvision/venv/bin/python3 /home/zuppif/gust-torchvision/playground.py
    eehh
    GPU available: True, used: True
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2]
    
      | Name          | Type       | Params
    ---------------------------------------------
    0 | train_metrics | ModuleDict | 0     
    1 | val_metrics   | ModuleDict | 0     
    2 | backbone      | Sequential | 11.2 M
    3 | head          | Sequential | 5.1 K 
    ---------------------------------------------
    14.7 K    Trainable params
    11.2 M    Non-trainable params
    11.2 M    Total params
    44.727    Total estimated model params size (MB)
    Validation sanity check: 0it [00:00, ?it/s]/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/flash/core/model.py:397: LightningDeprecationWarning: The `LightningModule.datamodule` property is deprecated in v1.3 and will be removed in v1.5. Access the datamodule through using `self.trainer.datamodule` instead.
      if self.datamodule is not None and getattr(self.datamodule, "data_pipeline", None) is not None:
    Validation sanity check:   0%|                                                                                                                                      | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):
      File "/home/zuppif/gust-torchvision/playground.py", line 60, in <module>
        trainer.finetune(classifier, datamodule=dm, strategy="freeze")
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/flash/core/trainer.py", line 165, in finetune
        return super().fit(model, train_dataloader, val_dataloaders, datamodule)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
        self._run(model)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
        self._dispatch()
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
        self.accelerator.start_training(self)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
        self.training_type_plugin.start_training(trainer)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
        self._results = trainer.run_stage()
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
        return self._run_train()
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1031, in _run_train
        self._run_sanity_check(self.lightning_module)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/flash/core/trainer.py", line 93, in _run_sanity_check
        super()._run_sanity_check(ref_model)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1115, in _run_sanity_check
        self._evaluation_loop.run()
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
        self.advance(*args, **kwargs)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
        dl_outputs = self.epoch_loop.run(
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
        self.advance(*args, **kwargs)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 110, in advance
        output = self.evaluation_step(batch, batch_idx, dataloader_idx)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 154, in evaluation_step
        output = self.trainer.accelerator.validation_step(step_kwargs)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 211, in validation_step
        return self.training_type_plugin.validation_step(*step_kwargs.values())
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 178, in validation_step
        return self.model.validation_step(*args, **kwargs)
      File "/home/zuppif/gust-torchvision/venv/lib/python3.8/site-packages/flash/image/classification/model.py", line 121, in validation_step
        batch = (batch[DefaultDataKeys.INPUT], batch[DefaultDataKeys.TARGET])
    TypeError: list indices must be integers or slices, not DefaultDataKeys
    (venv) zuppif@blackbird:~/gust-torchvision$ 
    

    Code sample

    # imports (inferred from the stack trace above)
    import torch
    import torchmetrics as M
    from pytorch_lightning import LightningDataModule
    from torch.utils.data import DataLoader, TensorDataset

    from flash import Trainer
    from flash.image import ImageClassifier

    class FakeData(LightningDataModule):
    
        def __init__(self, n: int = 512, n_classes: int = 10):
            super().__init__()
            self.n = n
            self.n_classes = n_classes
    
        def dataloader(self):
            imgs, labels = torch.randn((self.n, 3, 224, 224)), torch.randint(0, self.n_classes, size=(self.n, 1))
            ds = TensorDataset(imgs, labels)
            return DataLoader(ds, batch_size=32, num_workers=8)
    
        def train_dataloader(self):
            return self.dataloader()
    
        def val_dataloader(self):
            return self.dataloader()
    
        def test_dataloader(self):
            return self.dataloader()
    
    dm = FakeData()
    
    dl = dm.train_dataloader()
    num_classes = dm.n_classes
    
    metrics = M.MetricCollection({
        'accuracy': M.Accuracy(num_classes=num_classes),
        'recall': M.Recall(num_classes=num_classes),
        'f1': M.F1(num_classes=num_classes),
        'precision': M.Precision(num_classes=num_classes)
    })
    
    classifier = ImageClassifier(backbone='resnet18', num_classes=dm.n_classes, metrics=metrics)
    
    trainer = Trainer(gpus=1, max_epochs=1)
    trainer.finetune(classifier, datamodule=dm, strategy="freeze")
    
    trainer.save_checkpoint('./checkpoint.pt')
    

    Expected behavior

    It should work

    Environment

    • PyTorch Version (e.g., 1.0):
    • OS (e.g., Linux):
    • How you installed PyTorch (conda, pip, source):
    • Build command you used (if compiling from source):
    • Python version:
    • CUDA/cuDNN version:
    • GPU models and configuration:
    • Any other relevant information:

    Additional context

    bug / fix help wanted 
    opened by FrancescoSaverioZuppichini 16
  • try minimal requirements

    What does this PR do?

    unfreeze requirements, cc: @SeanNaren

    Before submitting

    • [ ] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
    • [x] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests? [not needed for typos/docs]
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [ ] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    enhancement 
    opened by Borda 15
  • simplify examples

    What does this PR do?

    Since we do not run tests on the examples anyway, there is no reason to have them wrapped in a main function, especially since there is nothing other than main in them anyway...

    Before submitting

    • [ ] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
    • [x] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests? [not needed for typos/docs]
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [ ] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    documentation 
    opened by Borda 15
  • NameError: name 'Parser' is not defined

    🐛 Bug

    When trying out object detection example, got the below error at datamodule.

    NameError: name 'Parser' is not defined
    

    Code sample

    datamodule = ObjectDetectionData.from_coco(
        train_folder="data/coco128/images/train2017/",
        train_ann_file="data/coco128/annotations/instances_train2017.json",
        val_split=0.1,
        transform_kwargs={"image_size": 512},
        batch_size=4,
    )
    
    

    To Reproduce

    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-5-56b8a0b8bc9b> in <module>
          4     val_split=0.1,
          5     transform_kwargs={"image_size": 512},
    ----> 6     batch_size=4,
          7 )
    
    5 frames
    /usr/local/lib/python3.7/dist-packages/flash/core/integrations/icevision/data.py in load_data(self, root, ann_file, parser, parser_kwargs)
         43         parser_kwargs = {} if parser_kwargs is None else parser_kwargs
         44         unwrapped_parser = getattr(parser, "func", parser)
    ---> 45         if inspect.isclass(unwrapped_parser) and issubclass(unwrapped_parser, Parser):
         46             parser = parser(ann_file, root, **parser_kwargs)
         47         elif isinstance(unwrapped_parser, Callable):
    
    NameError: name 'Parser' is not defined
    

    Expected behavior

    It should create a data module for further computations.

    Environment

    • OS (e.g., Linux): Linux
    • Python version: Python 3.7.15
    • PyTorch/Lightning/Flash Version (e.g., 1.10/1.5/0.7): Flash 0.8.0
    • GPU models and configuration: Colab GPU
    • Any other relevant information: Running on Google Colab
    bug / fix help wanted 
    opened by shravankumar147 14
  • Unable to generate predictions for wav2vec model fine-tuned with custom data

    Discussed in https://github.com/PyTorchLightning/pytorch-lightning/discussions/11432

    Originally posted by nayak24 on January 11, 2022:

    Hi, I'm trying to fine-tune the baseline wav2vec model with my own audio training/test data using Lightning Flash, essentially exactly following the tutorial in this doc: https://lightning-flash.readthedocs.io/en/latest/reference/speech_recognition.html

    However, I am running into an issue when generating the prediction for an audio file, and I'm getting a null output:

    94.4 M    Trainable params
    0         Non-trainable params
    94.4 M    Total params
    377.585   Total estimated model params size (MB)
    Epoch 0: 100%|█████████████████████████████| 88/88 [02:43<00:00, 1.86s/it, loss=633, v_num=57, train_loss_step=750.0]
    Predicting: 88it [00:00, ?it/s]
    [['']]

    I'm not sure what the issue is, as I've only replaced the Timit dataset with my own input data for fine-tuning, and the rest of the script follows exactly from the doc above. All of the input data are wav files with the following format:

    format                    | 1 (uncompressed PCM)
    number of channels        | 1 (mono)
    sampleRate                | 16000
    byteRate                  | 32000
    blockAlign                | 2
    bitsPerSample (bit depth) | 16

    I'm new to PyTorch Lightning and training with wav2vec as a whole, so I'm guessing that I'm missing something obvious. Any help would be greatly appreciated!

    Here is the full script I'm running:

    import torch
    import flash
    from flash.audio import SpeechRecognition, SpeechRecognitionData
    from flash.core.data.utils import download_data
    
    #download_data("https://pl-flash-data.s3.amazonaws.com/timit_data.zip", "./data")
    
    datamodule = SpeechRecognitionData.from_csv(
        input_fields="file",
        target_fields="text",
        #train_file="data/timit/train.json",
        #test_file="data/timit/test.json",
        train_file="FLT034/FLT034-TRAIN.csv",
        test_file="FLT034/FLT034-TEST.csv",
        batch_size=4,
    )
    
    #can use any wav2vec model in HuggingFace as backbone for finetuning
    model = SpeechRecognition(backbone="facebook/wav2vec2-base-960h")
    
    #create trainer and finetune model
    trainer = flash.Trainer(max_epochs=1)
    trainer.finetune(model, datamodule=datamodule, strategy='no_freeze')
    
    # predict on audio files
    #datamodule = SpeechRecognitionData.from_files(predict_files=["data/timit/example.wav"], batch_size=4)
    datamodule = SpeechRecognitionData.from_files(predict_files=["FLT034/FLT034-14.wav"], batch_size=4)
    predictions = trainer.predict(model, datamodule=datamodule)
    print(predictions)
    
    # Save Checkpoint 
    trainer.save_checkpoint("FL034_trained_model.pt") 
    

    And here is a sample of the train.csv file with the annotations:

    file,text
    "./FLT034-12.wav","Weather at one seven five eight zulu."
    "./FLT034-13.wav","Wind one niner zero at eight."
    "./FLT034-14.wav","Visibility eight ceiling eight hundred overcast."
    "./FLT034-15.wav","Temperature one five"
    "./FLT034-16.wav","Dewpoint one four"
    "./FLT034-17.wav","Altimeter three zero"
    "./FLT034-18.wav","Get both sides on a mic"
    
    question 
    opened by nayak24 13
  • TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]

    TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]

    Hi, while trying to finetune a bert-base model for multi-label text classification, I keep encountering this error: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]. I looked around and saw people suggesting to check whether there are missing values or None values in the dataset. I've checked my dataset and it has been properly preprocessed to remove any NaN and missing values. I even compared my dataset with the toy example's dataset of toxic comments, and the only difference I could see was the number of categories (in my case there are > 30).
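    As a quick sanity-check sketch (pandas assumed; the file path and column name below are placeholders for your data): tokenizers typically raise this TextEncodeInput error when some input row is None/NaN or not a string.

    import pandas as pd

    df = pd.read_csv("train.csv")  # hypothetical path to the training data
    text_col = "comment_text"  # placeholder: use your actual input column
    bad = df[df[text_col].isna() | ~df[text_col].map(lambda x: isinstance(x, str))]
    print(bad)  # rows the tokenizer would choke on
    df[text_col] = df[text_col].fillna("").astype(str)  # coerce to clean strings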

    Can anyone please help me on this one?

    Thank you

    bug / fix help wanted 
    opened by DrRaja 12
  • Updated the learn2learn "image_classification_imagenette_mini" example

    Updated the learn2learn "image_classification_imagenette_mini" example

    What does this PR do?

    There were some problems with downloading the dataset and with defining the ImageClassificationInputTransform for the ImageClassificationData, so I have remade the tutorial for easy integration of learn2learn with Flash. Fixes #1376

    Before submitting

    • [ ] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
    • [x] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests? [not needed for typos/docs]
    • [ ] Did you verify new and existing tests pass locally with your changes?
    • [ ] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Definitely! Make sure you had fun coding 🙃

    bug / fix 
    opened by uakarsh 12
  • fix channel dim selection on segmentation target

    fix channel dim selection on segmentation target

    What does this PR do?

    When loading mask files in semantic segmentation, the wrong axis was chosen, causing a corrupted mask. The remaining pixels were then resized in the transform function. This broke the examples in the semantic segmentation section.
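    A hedged illustration of this class of bug (not the exact patched code): for a mask loaded as an (H, W, C) array, the class index must be selected on the channel axis, not the first axis.

    import numpy as np

    mask = np.zeros((256, 256, 3), dtype=np.uint8)  # hypothetical channel-last mask
    target = mask[:, :, 0]  # correct: select along the channel axis -> (256, 256)
    corrupted = mask[0]     # wrong axis: a single row of shape (256, 3)
    print(target.shape, corrupted.shape)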

    Fixes #1489

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
    • [ ] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests? [not needed for typos/docs]
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [ ] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    bug / fix 
    opened by izikgo 0
  • Issue with `ImageClassificationData.from_dataset`

    Issue with `ImageClassificationData.from_dataset`

    🐛 Bug

    There seems to be an issue with the ImageClassificationData.from_datasets method. It fails to create the expected format, where the labels can be accessed via datamodule.labels.

    To Reproduce

    The error occurred with the following code, adapted from the example:

    ...
    
    datamodule = ImageClassificationData.from_datasets(
        train_dataset=train_dataset,
        val_dataset=valid_dataset,
        batch_size=32,
    )
    
    
    # 2. Build the task
    model = ImageClassifier(backbone="efficientnet_b0", labels=datamodule.labels)
    
    ...
    

    The datasets are created via

    ...
    
    train_val_dataset = datasets.ImageFolder(train_val_folder)
    
    ....
    
    train_dataset, valid_dataset = random_split(
        dataset=train_val_dataset,
        lengths=[no_train_images, no_valid_images],
        generator=torch.Generator().manual_seed(42),
    )
    
    

    Expected behavior

    I'd expect the from_datasets method to create a valid datamodule to use for training.
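    A possible workaround sketch (an assumption on my part, not a confirmed fix): random_split returns Subset objects that hide the underlying ImageFolder's class names, so read them off the parent dataset and pass them explicitly:

    # Subset does not forward `classes`, but the parent ImageFolder has them
    labels = train_val_dataset.classes
    model = ImageClassifier(backbone="efficientnet_b0", labels=labels, num_classes=len(labels))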

    Environment

    • OS (e.g., Linux): Colab instance
    • Python version: 3.8 I guess
    • PyTorch/Lightning/Flash Version (e.g., 1.10/1.5/0.7): 1.13.0+cu116/1.8.6/0.8.1.post0
    bug / fix help wanted 
    opened by funnym0nk3y 0
  • Load Numpy arrays of incompatible `dtypes`

    Load Numpy arrays of incompatible `dtypes`

    What does this PR do?

    Handles the loading of .npy images with integer data types better. Previously, the ndarrays would be unsafely cast to 'uint8'; if the original dtype was, say, 'int64', the loaded PIL Image could differ from what was intended.

    Now a warning is raised whenever there is unsafe casting. In addition, it attempts to load a float ndarray (instead of possibly an integer array).
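    A minimal sketch of the safety check described above (numpy assumed; the file name is hypothetical): warn when casting to uint8 would change values, and fall back to float instead.

    import warnings

    import numpy as np

    arr = np.load("image.npy")  # hypothetical .npy image
    if not np.can_cast(arr.dtype, np.uint8, casting="safe"):
        warnings.warn(f"Unsafe cast from {arr.dtype}; loading as float instead.")
        arr = arr.astype(np.float32)
    else:
        arr = arr.astype(np.uint8)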

    Before submitting

    • [ ] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
    • [ ] Did you make sure to update the documentation with your changes?
    • [ ] Did you write any new necessary tests? [not needed for typos/docs]
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [ ] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    bug / fix 
    opened by souravraha 0
  • Instance Segmentation Example Broken

    Instance Segmentation Example Broken

    🐛 Bug

    The Instance Segmentation example provided is not working. I have tried using the one in the docs and grabbing the latest example from GitHub. I also noticed that the link for the dataset used was outdated and did not return valid data, so I found the link to the new data on their website and used that. Essentially, the model appears to train but returns empty results, as if nothing was detected. This seems to indicate that either the pre-trained weights are broken or the output of the model is broken.

    https://github.com/Lightning-AI/lightning-flash/blob/cf969bcbab349c027f208168973110544c672358/flash_examples/instance_segmentation.py

    To Reproduce

    Here is a description of my environment: Windows 11. I followed these install steps in a virtual environment:

    # should be python version 3.8.15
    mamba create -n flash python==3.8.15
    mamba activate flash

    mamba install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=11.3 -c pytorch -c conda-forge
    pip install mmdet
    pip install mmcv

    # install the right versions of torchmetrics, pytorch-lightning, and setuptools
    pip install lightning-flash[image]
    pip install pre-commit
    pip install pysolotools

    pip install icevision[all]

    pip install sahi==0.10.7

    pip install icedata

    I noticed that many newer versions of libraries appeared to have issues with either Lightning Flash or IceVision or both. It took me a while to find older package versions that still maintained compatibility with everything.

    Code sample

    from functools import partial
    
    import flash
    from flash.core.utilities.imports import example_requires
    from flash.image import InstanceSegmentation, InstanceSegmentationData
    
    example_requires("image")
    
    import icedata  # noqa: E402
    
    # 1. Create the DataModule
    data_dir = icedata.pets.load_data()
    
    datamodule = InstanceSegmentationData.from_icedata(
        train_folder=data_dir,
        val_split=0.1,
        transform_kwargs=dict(image_size=(256, 256)),
        parser=partial(icedata.pets.parser, mask=True),
        batch_size=4,
    )
    
    # 2. Build the task
    model = InstanceSegmentation(
        head="mask_rcnn",
        backbone="resnet18_fpn",
        num_classes=datamodule.num_classes,
    )
    
    # 3. Create the trainer and finetune the model
    trainer = flash.Trainer(max_epochs=1, accelerator='gpu', devices=1)
    trainer.finetune(model, datamodule=datamodule, strategy="freeze")
    
    # 4. Detect objects in a few images!
    datamodule = InstanceSegmentationData.from_files(
        predict_files=[
            str(data_dir / "images/yorkshire_terrier_9.jpg"),
            str(data_dir / "images/yorkshire_terrier_12.jpg"),
            str(data_dir / "images/yorkshire_terrier_13.jpg"),
        ],
        batch_size=4,
    )
    predictions = trainer.predict(model, datamodule=datamodule)
    print(predictions)
    
    # 5. Save the model!
    trainer.save_checkpoint("instance_segmentation_model.pt")
    

    Expected behavior

    I would expect this to produce proper output data, as indicated by the tutorial in the docs. There should be detections present even after 1 epoch of transfer learning.

    bug / fix help wanted 
    opened by gatordevin 0
  • Support for pytorch 1.13

    Support for pytorch 1.13

    🚀 Feature

    Support for pytorch 1.13 and MPS

    Motivation

    I can't take advantage of MPS on macOS, since it is a recently added feature and lightning-flash doesn't allow usage above torch 1.10.

    Alternatives

    I already made config changes to force installation with newer torch and torchvision versions, but it seems that 'nms' isn't available in torchvision.ops.boxes, and icevision requires it.
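    As a side note, a minimal sketch (torch >= 1.12 assumed) for checking whether the MPS backend would actually be usable once newer torch versions are allowed:

    import torch

    # torch.backends.mps only exists on torch >= 1.12
    mps_ok = getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available()
    device = torch.device("mps" if mps_ok else "cpu")
    print(device)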

    enhancement help wanted 
    opened by benjats07 1
Releases(0.8.1.post0)
  • 0.8.1.post0(Jan 5, 2023)

    What's Changed

    • fixed type of 'n_gram' from bool to int in TranslationTask by @BrightXiaoHan in https://github.com/Lightning-AI/lightning-flash/pull/1486
    • pinned torchmetrics version for compatibility by @Borda in https://github.com/Lightning-AI/lightning-flash/pull/1495
    • pinned sahi to fix object detection when installing in a fresh environment by @ethanwharris in https://github.com/Lightning-AI/lightning-flash/pull/1496
    • pinned numpy for type compatibility by @Borda in https://github.com/Lightning-AI/lightning-flash/pull/1504

    New Contributors

    • @BrightXiaoHan made their first contribution in https://github.com/Lightning-AI/lightning-flash/pull/1486
    • @kjappelbaum made their first contribution in https://github.com/Lightning-AI/lightning-flash/pull/1503

    Full Changelog: https://github.com/Lightning-AI/lightning-flash/compare/0.8.1...0.8.1.post0

  • 0.8.1(Nov 8, 2022)

    What's Changed

    • Add CLIP backbones for text / image classification by @ethanwharris in https://github.com/Lightning-AI/lightning-flash/pull/1458
    • Replace DP/DDP/DDPSpawn plugins to strategies, keep the old for compatibility by @krshrimali in https://github.com/Lightning-AI/lightning-flash/pull/1451
    • Integration of lightning_utilties function into flash by @uakarsh in https://github.com/Lightning-AI/lightning-flash/pull/1457
    • refactored image_classifier_head to classifier_head by @Abelarm in https://github.com/Lightning-AI/lightning-flash/pull/1464
    • Raise better error if icevision not installed if module isn't found (loading data) by @krshrimali in https://github.com/Lightning-AI/lightning-flash/pull/1474
    • Add support for Lightning 1.8 + Fixes for the CI by @krshrimali in https://github.com/Lightning-AI/lightning-flash/pull/1470 and https://github.com/Lightning-AI/lightning-flash/pull/1479
    • Fix compatibility with TM 0.10 by @ethanwharris in https://github.com/Lightning-AI/lightning-flash/pull/1469

    New Contributors

    • @Abelarm made their first contribution in https://github.com/Lightning-AI/lightning-flash/pull/1464

    Full Changelog: https://github.com/Lightning-AI/lightning-flash/compare/0.8.0...0.8.1

  • 0.8.0(Sep 5, 2022)

    We are elated to announce the release of Lightning Flash v0.8, a feature-rich release with improved testing to ensure a better user experience for all our lovely users! The team at Lightning AI and our community contributors have been working hard on this release, and nothing makes us happier than to share all their lovely contributions with you.

    We discuss major features and changes below. For a curated list, scroll to the bottom to see all the pull requests included for this release.

    TPU Support 🦸🏻

    Before this release, Lightning Flash worked well on a single-core TPU (training, validation, and prediction) but failed comprehensively on multiple cores. This release enables training and validation support for multi-core TPUs, allowing users to try out their models on TPUs using Lightning Flash. Prediction on multi-core TPUs is an ongoing effort, and we hope to bring it to you in the near future.

    |                | Before v0.8                      | After v0.8                       |
    | -------------- | -------------------------------- | -------------------------------- |
    | Single core    | Training, Validation, Prediction | Training, Validation, Prediction |
    | Multiple cores | Not supported                    | Training, Validation             |

    As we move ahead and see more users trying TPUs with Lightning Flash, we expect there might be unseen errors or issues, and we look forward to addressing them as we get the chance. So please don't hesitate to let us know about your experience!
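    A minimal sketch of what this enables (assuming a TPU runtime and the usual PyTorch Lightning accelerator flags; the data folder here is a placeholder):

    import flash
    from flash.image import ImageClassificationData, ImageClassifier

    # datamodule/model as in any Flash example; paths are placeholders
    datamodule = ImageClassificationData.from_folders(train_folder="data/train/", batch_size=4)
    model = ImageClassifier(num_classes=datamodule.num_classes, backbone="resnet18")

    # training and validation on 8 TPU cores now work out of the box
    trainer = flash.Trainer(max_epochs=1, accelerator="tpu", devices=8)
    trainer.finetune(model, datamodule=datamodule, strategy="freeze")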

    Remote Data Loading: fsspec arrives into Lightning Flash ☁️

    Before this release, users had to download a dataset or file from a URL and pass the local path to our data loader classes. This was a pain point that we are happy to let go of in this release. Starting with v0.8, you won't have to download any of those files locally: you can just pass the file URL and expect it to work!

    Before v0.8: download titanic.csv from the URL and pass the path to the train_file argument:
    from flash.tabular import TabularClassificationData
    
    datamodule = TabularClassificationData.from_csv(
        categorical_fields=["Age", "Cabin"],
        numerical_fields="Fare",
        target_fields="Survived",
        train_file="titanic.csv",
        val_split=0.1,
        batch_size=8,
    )
    

    After v0.8: just pass the URL to the train_file argument:

    from flash.tabular import TabularClassificationData
    
    datamodule = TabularClassificationData.from_csv(
        categorical_fields=["Age", "Cabin"],
        numerical_fields="Fare",
        target_fields="Survived",
        train_file="https://pl-flash-data.s3.amazonaws.com/titanic.csv",
        val_split=0.1,
        batch_size=8,
    )
    

    For more details, feel free to check out the documentation here.

    Video Classification from Tensors 📹

    At times, it's necessary to load raw data or pre-process videos before loading data and training the model. Raw data for Video Classification is mostly available as tensors, and before this release one had to save it back to video files and pass the paths to the data loading classes in Flash. Starting with this release, we support loading data from tensors for Video Classification.

    import torch
    from flash.video import VideoClassifier, VideoClassificationData
    import flash
    
    # 5 number of frames, 3 channels, height = 10 and width = 10
    mock_tensors = torch.randint(size=(3, 5, 10, 10), low=0, high=255)
    datamodule = VideoClassificationData.from_tensors(
        train_data=[mock_tensors, mock_tensors],  # can also stack: torch.stack((mock_tensors, mock_tensors))
        train_targets=["patient", "doctor"],
        predict_data=[mock_tensors],
        batch_size=1,
    )
    
    model = VideoClassifier(num_classes=datamodule.num_classes, pretrained=False, backbone="slow_r50", labels=datamodule.labels)
    trainer = flash.Trainer(max_epochs=1)
    trainer.finetune(model, datamodule=datamodule)
    

    This will also come in handy for multi-modal pipelines where you don't want to save a model's output to files and instead want to pass the raw data to the next model, saving quite a lot of time otherwise wasted in the conversion process.

    Refactored Transforms in Lightning Flash ⚙️

    This is one of the community-driven contributions that we are proud to share. Before this release, a user had to pass an input transform class for each stage, which was cumbersome. With this release, you can just pass transform=<YourTransformClass> to the required method. This is a breaking change; if you are not sure how to resolve it, please create an issue and we'll be happy to help!

    Before v0.8:

    dm = XYZTask_DataModule.from_xyz(
        train_file=train_file,
        val_file=val_file,
        test_file=test_file,
        predict_file=predict_file,
        train_transform=InputTransform,
        val_transform=InputTransform,
        test_transform=InputTransform,
        predict_transform=InputTransform,
        transform_kwargs=transform_kwargs,
    )

    After v0.8:

    dm = XYZTask_DataModule.from_xyz(
        train_file=train_file,
        val_file=val_file,
        test_file=test_file,
        predict_file=predict_file,
        transform=InputTransform(**transform_kwargs),
    )
    

    Note that, within your InputTransform class, you can have <stage>_per_batch_transform_on_device methods to support various stages.

    class SampleInputTransform(InputTransform):
        def per_sample_transform(self):
            def fn(x):
                return x
            return fn
    
        def train_per_batch_transform_on_device(self) -> Callable:
            return ...
    
        def val_per_batch_transform_on_device(self) -> Callable:
            return ...
    
        def test_per_batch_transform_on_device(self) -> Callable:
            return ...
    
        def predict_per_batch_transform_on_device(self) -> Callable:
            return ...
    

    Object Detection in Flash is now servable 💁

    If you aren't aware yet, Lightning Flash supports serving models. Starting with this release, Object Detection joins the beautiful category of tasks that can be served using Lightning Flash. Below is an example of what the inference server code for object detection looks like:

    # Inference Server
    from flash.image import ObjectDetector
    
    model = ObjectDetector.load_from_checkpoint("https://flash-weights.s3.amazonaws.com/0.8.0/object_detection_model.pt")
    model.serve()
    

    For more details, check out the documentation here.
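    On the client side, a hedged sketch of how a request could be sent to the running server. This follows the payload pattern used in the Flash serve docs for image tasks; the port, endpoint, and exact payload schema for object detection are assumptions here, and the image path is a placeholder.

    import base64

    import requests

    # the server expects base64-encoded image bytes
    with open("path/to/image.jpg", "rb") as f:
        imgstr = base64.b64encode(f.read()).decode("UTF-8")

    body = {"session": "UUID", "payload": {"inputs": {"data": imgstr}}}
    resp = requests.post("http://127.0.0.1:8000/predict", json=body)
    print(resp.json())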

    Added

    • Added support for from_tensors for VideoClassification (#1389)
    • Added fine tuning strategies for DeepSpeed (with parameter loading and storing omitted) (#1377)
    • Added torchvision as a requirement to datatype_audio.txt as it's used for Audio Classification (#1425)
    • Added figsize and limit_nb_samples for showing batch images (#1381)
    • Added support for from_lists for Tabular Classification and Regression (#1337)
    • Added support for from_dicts for Tabular Classification and Regression (#1331)
    • Added support for using the ImageEmbedder SSL training for all image classifier backbones (#1264)
    • Added support for audio file formats to AudioClassificationData (#1085)
    • Added support for Flash serve to the ObjectDetector (#1370)
    • Added support for loading ImageClassificationData from PIL images with from_images (#1372)
    • Added support for loading ObjectDetectionData with from_numpy, from_images, and from_tensors (#1372)
    • Added support for remote data loading with fsspec (#1387)
    • Added support for TSV files to from_csv methods (#1387)
    • Added support for more formats when loading audio files (#1387)
    • Added support to use any task as an embedder by calling as_embedder (#1396)
    • Added support for normalization of images in SemanticSegmentationData (#1399)

    Changed

    • Changed the ImageEmbedder dependency on VISSL to optional (#1276)
    • Changed the transforms in SemanticSegmentationData to use albumentations instead of Kornia (#1313)

    Removed

    • Removed support for audio files with sd2 extension, because SoundFile (for sd2 extension) doesn't accept fsspec objects. (#1409)

    Fixed

    • Fixed a case where a suitable error was not raised for image segmentation (kornia) (#1425).
    • Fixed the script of integrating lightning-flash with learn2learn (#1376)
    • Fixed JIT tracing tests where the model class was not attached to the Trainer class (#1410)
    • Fixed examples for BaaL integration by removing usage of on_<stage>_dataloader hooks (removed in PL 1.7.0) (#1410)
    • Fixed examples for BaaL integration for the case when probabilities list is empty (#1410)
    • Fixed a bug where collate functions were not being attached successfully after the DataLoader is initialized (in PL 1.7.0 changing attributes after initialization doesn't do anything) (#1410)
    • Fixed a bug where grayscale images were not properly converted to RGB when loaded. (#1394)
    • Fixed a bug where size of mask for instance segmentation doesn't match to size of original image. (#1353)
    • Fixed image classification data show_train_batch for subplots with rows > 1. (#1339)
    • Fixed support for all the versions (including the latest and older) of baal. (#1315)
    • Fixed a bug where a loaded TabularClassifier or TabularRegressor checkpoint could not be served (#1324)
    • Fixed a bug where the freeze_unfreeze and unfreeze_milestones finetuning strategies could not be used in tandem with a onecyclelr LR scheduler (#1329)
    • Fixed a bug where the backbone learning rate would be divided by 10 when unfrozen if using the freeze_unfreeze or unfreeze_milestones strategies (#1329)
    • Fixed naming of optimizer and scheduler registries which did not allow manual optimization. (#1342)
    • Fixed a bug where the processor_backbone argument to SpeechRecognition was not used for decoding outputs (#1362)
    • Fixed a bug where .npy files could not be used with SemanticSegmentationData (#1369)

    Contributors

    @akihironitta @aniketmaurya @Borda @carmocca @ethanwharris @JustinGoheen @krshrimali @ligaz @Nico995 @uakarsh

    If we forgot someone let us know :smiley:

  • 0.7.5(May 11, 2022)

    [0.7.5] - 2022-05-11

    Fixed

    • Fixed image classification data show_train_batch for subplots with rows > 1. (https://github.com/PyTorchLightning/lightning-flash/pull/1315)
    • Fixed support for all the versions (including the latest and older) of baal. (https://github.com/PyTorchLightning/lightning-flash/pull/1315)
    • Fixed a bug where a loaded TabularClassifier or TabularRegressor checkpoint could not be served (https://github.com/PyTorchLightning/lightning-flash/pull/1324)
    • Fixed a bug where the freeze_unfreeze and unfreeze_milestones finetuning strategies could not be used in tandem with a onecyclelr LR scheduler (https://github.com/PyTorchLightning/lightning-flash/pull/1329)
    • Fixed a bug where the backbone learning rate would be divided by 10 when unfrozen if using the freeze_unfreeze or unfreeze_milestones strategies (https://github.com/PyTorchLightning/lightning-flash/pull/1329)

    Contributors

    @Borda @ethanwharris @kaushikb11 @krshrimali

    If we forgot someone let us know :smiley:

  • 0.7.4(Apr 27, 2022)

    [0.7.4] - 2022-04-27

    Fixed

    • Fixed a bug where LR schedulers from HuggingFace could not be used with newer versions of PyTorch Lightning (#1307)
    • Fixed a bug where the default Flash zero configurations for ObjectDetector, InstanceSegmentation, and KeypointDetector would error with the latest version of some requirements (#1306)
    • Fixed plain LightningModule support for Flash data modules. (#1281)

    Contributors

    @Borda @ethanwharris @krshrimali @rohitgr7

    If we forgot someone let us know :smiley:

  • 0.7.3(Apr 13, 2022)

    [0.7.3] - 2022-04-13

    Fixed

    • Fixed a bug where some backbones were incorrectly listed as available for the ObjectDetector, InstanceSegmentation, and KeypointDetector (#1267)
    • Fixed a bug where the backbone would not be frozen when finetuning the SpeechRecognition task (#1275)
    • Fixed a bug where the backbone would not be frozen when finetuning the QuestionAnswering task with certain model types (#1275)

    Contributors

    @ethanwharris @krshrimali

    If we forgot someone let us know :smiley:

  • 0.7.2(Mar 30, 2022)

    [0.7.2] - 2022-03-30

    Fixed

    • Fixed examples (question answering), where NLTK's punkt module needs to be downloaded first. (#1215)
    • Fixed normalizing inputs to video classification (#1213)
    • Fixed a bug where pretraining_transforms in the ImageEmbedder was never called. (#1196)
    • Fixed a bug where BASE_MODEL_NAME was not in the dict for dino and moco strategies. (#1196)
    • Fixed support for torch==1.11.0 (#1234)
    • Fixed DDP spawn support for ObjectDetector, InstanceSegmentation, and KeypointDetector (#1222)
    • Fixed a bug where InstanceSegmentation would fail if samples had an inconsistent number of bboxes, labels, and masks (these will now be treated as negative samples) (#1222)
    • Fixed a bug where collate functions were never called in the ImageEmbedder class. (#1217)
    • Fixed a bug where ObjectDetector, InstanceSegmentation, and KeypointDetector would log train and validation metrics with the same name (#1252)
    • Fixed a bug where using ReduceLROnPlateau would raise an error (#1251)
    • Fixed GPU support for self-supervised training with the ImageEmbedder (#1256)

    Contributors

    @aisensiy @andife @aniketmaurya @Borda @dudeperf3ct @ethanwharris @krshrimali

    If we forgot someone let us know :smiley:

  • 0.7.1(Mar 1, 2022)

    [0.7.1] - 2022-03-01

    Added

    • Added the normalization parameters of torchvision.transforms.Normalize as transform_kwargs in the ImageClassificationInputTransform (#1178)
    • Added available_outputs method to the Task (#1206)

    Fixed

    • Fixed a bug where DDP would not work with Flash tasks (#1182)
    • Fixed DDP support for VideoClassifier (#1189)
    • Fixed a bug where buffers in loss functions were not correctly registered in the Task (#1203)
    • Fixed support for passing a sampler instance to from_* methods / the DataModule (#1204)

    Contributors

    @aisensiy @AndresAlgaba @Borda @ethanwharris

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.7.0(Feb 15, 2022)

    [0.7.0] - 2022-02-15

    Added

    • Added support for multi-label, space delimited, targets (#1076)
    • Added support for tabular classification / regression backbones from PyTorch Tabular (#1098)
    • Added Flash zero support for tabular regression (#1098)
    • Added support for COCO annotations with non-default keypoint labels to KeypointDetectionData.from_coco (#1102)
    • Added support for from_csv and from_data_frame to VideoClassificationData (#1117)
    • Added support for SemanticSegmentationData.from_folders where mask files have different extensions to the image files (#1130)
    • Added FlashRegistry of Available Heads for flash.image.ImageClassifier (#1152)
    • Added support for ObjectDetectionData.from_files (#1154)
    • Added support for passing the Output object (or a string e.g. "labels") to the flash.Trainer.predict method (#1157)
    • Added support for passing the TargetFormatter object to from_* methods for classification to override target handling (#1171)

    Changed

    • Changed Wav2Vec2Processor to AutoProcessor and separated it from the backbone [optional] (#1075)
    • Renamed ClassificationInput to ClassificationInputMixin (#1116)
    • Changed the default learning_rate for all tasks to be None, corresponding to the default for your chosen optimizer (#1172)

    Fixed

    • Fixed a bug when not explicitly passing embedding_sizes to the TabularClassifier and TabularRegressor tasks (#1067)
    • Fixed a bug where under some circumstances transforms would not get called (#1072)
    • Fixed a bug where prediction would sometimes give the wrong number of outputs (#1077)
    • Fixed a bug where passing the val_split to the DataModule would not have the desired effect (#1079)
    • Fixed a bug where passing predict_data_frame to ImageClassificationData.from_data_frame raised an error (#1088)
    • Fixed a bug where segmentation files / masks were loaded with an inconsistent ordering (#1094)
    • Fixed a bug with AudioClassificationData.from_numpy (#1096)
    • Fixed a bug when using SpeechRecognitionData.from_files for training / validating / testing (#1097)
    • Fixed a bug when using SpeechRecognitionData.from_csv or from_json when predicting without targets (#1097)
    • Fixed a bug where SpeechRecognitionData.from_datasets did not work as expected (#1097)
    • Fixed a bug where loading data for prediction with SemanticSegmentationData.from_folders raised an error (#1101)
    • Fixed a bug when passing a predict_folder argument to from_coco / from_voc / from_via in IceVision tasks (#1102)
    • Fixed ObjectDetectionData.from_voc and ObjectDetectionData.from_via (#1102)
    • Fixed a bug where InstanceSegmentationData.from_coco would raise an error if not using file-based masks (#1102)
    • Fixed InstanceSegmentationData.from_voc (#1102)
    • Fixed a bug when loading tabular data for prediction without a target field / column (#1114)
    • Fixed a bug when loading prediction data for graph classification without targets (#1121)
    • Fixed a bug where loading Seq2Seq data for prediction would not work if the target field was not present (#1128)
    • Fixed a bug where from_fiftyone classmethods did not work correctly with a predict_dataset (#1136)
    • Fixed a bug where the labels property would return None when using ObjectDetectionData.from_fiftyone (#1136)
    • Fixed a bug where TabularData would not work correctly with no categorical variables (#1144)
    • Fixed a bug where loading TabularForecastingData for prediction would only yield a single sample per series (#1149)
    • Fixed a bug where backbones for the ObjectDetector, KeypointDetector, and InstanceSegmentation tasks were not always frozen correctly when finetuning (#1163)
    • Fixed a bug where DataModule.multi_label would sometimes be None when it had been inferred to be False (#1165)

    Removed

    • Removed the Seq2SeqData base class (use TranslationData or SummarizationData directly) (#1128)
    • Removed the ability to attach the Output object directly to the model (#1157)

    Contributors

    @Actis92 @AjinkyaIndulkar @bartonp2 @Borda @daMichaelB @ethanwharris @flozi00 @karthikrangasai @MikeTrizna

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.7.0rc0(Feb 4, 2022)

  • 0.6.0(Dec 13, 2021)

    [0.6.0] - 2021-12-13

    Added

    • Added TextEmbedder task (#996)
    • Added predict_kwargs in ObjectDetector, InstanceSegmentation, KeypointDetector (#990)
    • Added backbones for GraphClassifier (#592)
    • Added GraphEmbedder task (#592)
    • Added support for comma delimited multi-label targets to the ImageClassifier (#997)
    • Added datapipeline_state on dataset creation within the from_* methods from the DataModule (#1018)

    Changed

    • Changed DataSource to Input (#929)
    • Changed Preprocess to InputTransform (#951)
    • Changed classes named *Serializer and properties / variables named serializer to be *Output and output respectively (#927)
    • Changed Postprocess to OutputTransform (#942)
    • Changed loading of RGBA images to drop alpha channel by default (#946)
    • Updated the FlashFinetuning callback to use separate hooks that let users use the freezing logic provided out-of-the-box by Flash, and routed FlashFinetuning through a registry. (#830)
    • Changed the SpeechRecognition task to use AutoModelForCTC rather than just Wav2Vec2ForCTC (#874)
    • Changed the Deserializer to subclass ServeInput (#1013)
    • Added Output suffix to Preds, FiftyOneDetectionLabels, SegmentationLabels, FiftyOneSegmentationLabels, DetectionLabels, Classes, FiftyOneLabels, Labels, Logits, Probabilities (#1011)
    • Changed from_files and from_folders from ObjectDetectionData, InstanceSegmentationData, KeypointDetectionData to support only the predicting stage (#1018)
    • Changed Image Classification Task to use the new DataModule API (#1025)

    Deprecated

    • Deprecated flash.core.data.process.Serializer in favour of flash.core.data.io.output.Output (#927)
    • Deprecated Task.serializer in favour of Task.output (#927)
    • Deprecated flash.text.seq2seq.core.metrics in favour of torchmetrics[text] (#648)
    • Deprecated flash.core.data.data_source.DefaultDataKeys in favour of flash.DataKeys (#929)
    • Deprecated data_source argument to flash.Task.predict in favour of input (#929)

    Fixed

    • Fixed a bug where using image classification with DDP spawn would trigger an infinite recursion (#969)
    • Fixed a bug where Flash could not be used with IceVision 0.11.0 (#989)
    • Fixed a bug where backbone weights were sometimes not frozen correctly (#992)
    • Fixed a bug where translation metrics were not computed correctly (#992)
    • Fixed a bug where additional DataModule keyword arguments could not be configured with Flash Zero for some tasks (#994)
    • Fixed a bug where the TabularForecaster would not work with some versions of pandas (#995)

    Removed

    • Removed OutputMapping (#939)
    • Removed Output.enable and Output.disable (#939)
    • Removed OutputTransform.save_sample and save_data hooks (#948)
    • Removed InputTransform pre_tensor_transform, to_tensor_transform, post_tensor_transform hooks in favour of per_sample_transform (#1010)
    • Removed Task.predict, use Trainer.predict instead (#1030)
    • Removed the backbone argument from TextClassificationData, it is now sufficient to only provide a backbone argument to the TextClassifier (#1022)
    • Removed support for the serve_sanity_check argument in flash.Trainer (#1062)

    Contributors

    @abhijithneilabraham @Actis92 @alexheat @ananyahjha93 @Borda @ethanwharris @flozi00 @karthikrangasai @PabloAMC @SkafteNicki @tchaton

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.5.2(Nov 5, 2021)

    [0.5.2] - 2021-11-05

    Added

    • Added a TabularForecaster task based on PyTorch Forecasting (#647)
    • Added a TabularRegressor task (#892)

    Fixed

    • Fixed a bug where test metrics were not logged correctly with active learning (#879)
    • Fixed a bug where validation metrics could be aggregated together with test metrics in some cases (#900)
    • Fixed a bug where the latest versions of torchmetrics and Lightning Flash could not be installed together (#902)
    • Fixed compatibility with PyTorch-Lightning 1.5 (#933)

    Contributors

    @aniketmaurya @awaelchli @Borda @Dref360 @ethanwharris @pietrolesci @sumanmichael @twsl

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.5.1(Oct 26, 2021)

    [0.5.1] - 2021-10-26

    Added

    • Added LabelStudio integration (#554)
    • Added support learn2learn training_strategy for ImageClassifier (#737)
    • Added vissl training_strategies for ImageEmbedder (#682)
    • Added support for from_data_frame to TextClassificationData (#785)
    • Added FastFace integration (#606)
    • Added support for from_lists to TextClassificationData (#805)

    Changed

    • Changed the default num_workers on linux to 0 (matching the default for other OS) (#759)
    • Optimizer and LR Scheduler registry are used to get the respective inputs to the Task using a string (or a callable). (#777)

    Fixed

    • Fixed a bug where additional kwargs (e.g. sampler) passed to tabular data would be ignored (#792)
    • Fixed a bug where loading text data with additional non-numeric columns (not input or target) would give an error (#888)

    New Contributors

    • @bamblebam made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/735
    • @dlangerm made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/765
    • @pietrolesci made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/767
    • @gianscarpe made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/776
    • @kingyiusuen made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/785
    • @Isaac-Flath made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/799
    • @KonstantinKorotaev made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/554
    • @borhanMorphy made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/606
    • @EStorm21 made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/837
    • @parmidaatg made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/822
    • @Darktex made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/824
    • @Dref360 made their first contribution in https://github.com/PyTorchLightning/lightning-flash/pull/861

    PR List

    • Bump version to 0.5.1dev by @ethanwharris in https://github.com/PyTorchLightning/lightning-flash/pull/749
    • Docs for backbones by @bamblebam in https://github.com/PyTorchLightning/lightning-flash/pull/735
    • Refactor unnecessary else / elif when if block has a return statement by @deepsource-autofix in https://github.com/PyTorchLightning/lightning-flash/pull/751
    • Clean up docs by @ethanwharris in https://github.com/PyTorchLightning/lightning-flash/pull/754
    • Set logo to have a white background for dark mode by @SeanNaren in https://github.com/PyTorchLightning/lightning-flash/pull/757
    • New README by @ethanwharris in https://github.com/PyTorchLightning/lightning-flash/pull/756
    • Speed up and fix graph tests by @ethanwharris in https://github.com/PyTorchLightning/lightning-flash/pull/759
    • Feature/output data keys by @dlangerm in https://github.com/PyTorchLightning/lightning-flash/pull/765
    • Fix logo spacing by @SeanNaren in https://github.com/PyTorchLightning/lightning-flash/pull/766
    • Move text backbones in separate module by @pietrolesci in https://github.com/PyTorchLightning/lightning-flash/pull/767
    • Speed up question answering tests by @ethanwharris in https://github.com/PyTorchLightning/lightning-flash/pull/775
    • [PoC] Add MetaLearning support through learn2learn by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/737
    • [Readme] Add training strategies by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/780
    • VISSL initial integration by @ananyahjha93 in https://github.com/PyTorchLightning/lightning-flash/pull/682
    • Add thumbnails to card items by @SeanNaren in https://github.com/PyTorchLightning/lightning-flash/pull/787
    • Document object detector augmentations by @gianscarpe in https://github.com/PyTorchLightning/lightning-flash/pull/776
    • Fix RTD build by @ethanwharris in https://github.com/PyTorchLightning/lightning-flash/pull/789
    • VISSL collate function/transforms restructure by @ananyahjha93 in https://github.com/PyTorchLightning/lightning-flash/pull/786
    • TextClassificationData from_dataframe by @kingyiusuen in https://github.com/PyTorchLightning/lightning-flash/pull/785
    • [Doc] Add learn2learn integrations documentation by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/788
    • [Doc] VISSL docs by @ananyahjha93 in https://github.com/PyTorchLightning/lightning-flash/pull/794
    • Add sampler argument to tabular data by @ethanwharris in https://github.com/PyTorchLightning/lightning-flash/pull/792
    • [Feat] Add ActiveLearning Loop Customization v2 by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/779
    • Update Readme by @Isaac-Flath in https://github.com/PyTorchLightning/lightning-flash/pull/799
    • Update flash_zero.rst by @williamFalcon in https://github.com/PyTorchLightning/lightning-flash/pull/796
    • Add question answering thumbnail by @SeanNaren in https://github.com/PyTorchLightning/lightning-flash/pull/810
    • enable persistent workers for train and val dataloaders by @dlangerm in https://github.com/PyTorchLightning/lightning-flash/pull/812
    • Adding integration with Label Studio by @KonstantinKorotaev in https://github.com/PyTorchLightning/lightning-flash/pull/554
    • Add from_lists to TextClassificationData by @kingyiusuen in https://github.com/PyTorchLightning/lightning-flash/pull/805
    • [Doc] by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/813
    • Face Detection Task (task-a-thon) by @borhanMorphy in https://github.com/PyTorchLightning/lightning-flash/pull/606
    • add Kaggle links by @Borda in https://github.com/PyTorchLightning/lightning-flash/pull/826
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/PyTorchLightning/lightning-flash/pull/831
    • Add val_loss and test_loss calculation and logging for QnA task by @karthikrangasai in https://github.com/PyTorchLightning/lightning-flash/pull/832
    • Fix typo in learn2learn example by @EStorm21 in https://github.com/PyTorchLightning/lightning-flash/pull/837
    • HotFix for doc build on master by @SeanNaren in https://github.com/PyTorchLightning/lightning-flash/pull/849
    • Bump version to 0.5.1rc0 by @SeanNaren in https://github.com/PyTorchLightning/lightning-flash/pull/850
    • Add FlashDataset by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/851
    • Add FlashDataset update by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/853
    • Missing docstring on methods by @SkafteNicki in https://github.com/PyTorchLightning/lightning-flash/pull/854
    • Add PreprocessTransform by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/852
    • added query_size, and initial_num_labels. removed num_labels_randomly… by @parmidaatg in https://github.com/PyTorchLightning/lightning-flash/pull/822
    • [bugfix] Change to torchmetrics instead of PL by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/858
    • [bugfix] Resolve bug with Lightning 1.5.0rc0 by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/859
    • Freeze structlog version by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/860
    • Add support for PreprocessTransform to FlashDatasets by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/856
    • Fix VideoClassificationData.from_files() not working by @Darktex in https://github.com/PyTorchLightning/lightning-flash/pull/824
    • Fix predict DataLoader in Active learning by @Dref360 in https://github.com/PyTorchLightning/lightning-flash/pull/861
    • Fix inference for instance segmentation by @SeanNaren in https://github.com/PyTorchLightning/lightning-flash/pull/857
    • 2/n Add Custom Data Loading Tutorial + API improvement. by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/855
    • Rename PreprocessTransform to InputTransform by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/868
    • Add Serving to RunningStage by @tchaton in https://github.com/PyTorchLightning/lightning-flash/pull/872
    • Refactor text data loading by @pietrolesci in https://github.com/PyTorchLightning/lightning-flash/pull/870
    • PoC: Revamp optimizer and scheduler experience using registries by @karthikrangasai in https://github.com/PyTorchLightning/lightning-flash/pull/777
    • VISSL datapipeline fix by @ananyahjha93 in https://github.com/PyTorchLightning/lightning-flash/pull/880
    • Fix RTD Build by @ethanwharris in https://github.com/PyTorchLightning/lightning-flash/pull/887
    • Fix text classification data loading by @ethanwharris in https://github.com/PyTorchLightning/lightning-flash/pull/888
    • Update docutils package version in requirements by @awaelchli in https://github.com/PyTorchLightning/lightning-flash/pull/891
    • Bump version to 0.5.1 by @ethanwharris in https://github.com/PyTorchLightning/lightning-flash/pull/890
  • 0.5.1rc0(Oct 11, 2021)

  • 0.5.0(Sep 7, 2021)

    [0.5.0] - 2021-09-07

    Added

    • Added support for (input, target) style datasets (e.g. torchvision) to the from_datasets method (#552)
    • Added support for from_csv and from_data_frame to ImageClassificationData (#556)
    • Added SimCLR, SwAV, Barlow-twins pretrained weights for resnet50 backbone in ImageClassifier task (#560)
    • Added support for Semantic Segmentation backbones and heads from segmentation-models.pytorch (#562)
    • Added support for nesting of Task objects (#575)
    • Added PointCloudSegmentation Task (#566)
    • Added PointCloudObjectDetection Task (#600)
    • Added a GraphClassifier task (#73)
    • Added the option to pass pretrained as a string to SemanticSegmentation to change pretrained weights to load from segmentation-models.pytorch (#587)
    • Added support for the field parameter for loading JSON based datasets in text tasks. (#585)
    • Added AudioClassificationData and an example for classifying audio spectrograms (#594)
    • Added a SpeechRecognition task for speech to text using Wav2Vec (#586)
    • Added Flash Zero, a zero code command line ML platform built with flash (#611)
    • Added support for .npy and .npz files to ImageClassificationData and AudioClassificationData (#651)
    • Added support for from_csv to the AudioClassificationData (#651)
    • Added option to pass a resolver to the from_csv and from_pandas methods of ImageClassificationData, which is used to resolve filenames given IDs (#651)
    • Added integration with IceVision for the ObjectDetector (#608)
    • Added keypoint detection task (#608)
    • Added instance segmentation task (#608)
    • Added Torch ORT support to Transformer based tasks (#667)
    • Added support for flash zero with the InstanceSegmentation and KeypointDetector tasks (#672)
    • Added support for in_chans argument to the flash ResNet to control the expected number of input channels (#673)
    • Added a QuestionAnswering task for extractive question answering (#607)
    • Added automatic unwrapping of IceVision prediction objects (#727)
    • Added support for the ObjectDetector with FiftyOne (#727)
    • Added support for MP3 files to the SpeechRecognition task with librosa (#726)
    • Added support for from_numpy and from_tensors to AudioClassificationData (#745)

    Changed

    • Changed how pretrained flag works for loading weights for ImageClassifier task (#560)
    • Removed bolts pretrained weights for SSL from ImageClassifier task (#560)
    • Changed the behaviour of the sampler argument of the DataModule to take a Sampler type rather than instantiated object (#651)
    • Changed arguments to ObjectDetector, use head instead of model and append _fpn to the backbone name instead of the fpn argument (#608)

    Fixed

    • Fixed a bug where serve sanity checking would not be triggered using the latest PyTorchLightning version (#493)
    • Fixed a bug where train and validation metrics weren't being correctly computed (#559)
    • Fixed a bug where an uncaught ValueError could be raised when checking if a module is available (#615)
    • Fixed a bug where some tasks were not compatible with PyTorch 1.7 due to use of torch.jit.isinstance (#611)
    • Fixed a bug where custom samplers would not be properly forwarded to the data loader (#651)
    • Fixed a bug where it was not possible to pass no metrics to the ImageClassifier or TextClassifier (#660)
    • Fixed a bug where drop_last would be set to True during prediction and testing (#671)
    • Fixed a bug where flash was not compatible with pytorch-lightning >= 1.4.3 (#690)

    Contributors

    @ananyahjha93 @aniketmaurya @aribornstein @Borda @ethanwharris @flozi00 @hhsecond @hihunjin @karthikrangasai @Kinyugo @PeppeSaccardi @pmeier @SeanNaren @sumanmichael @tchaton @tszumowski

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.5.0rc0(Sep 1, 2021)

  • 0.4.0(Jun 22, 2021)

    [0.4.0] - 2021-06-22

    Added

    • Added integration with FiftyOne (#360)
    • Added flash.serve (#399)
    • Added support for torch.jit to tasks where possible and documented task JIT compatibility (#389)
    • Added option to provide a Sampler to the DataModule to use when creating a DataLoader (#390)
    • Added support for multi-label text classification and toxic comments example (#401)
    • Added a sanity checking feature to flash.serve (#423)

    Changed

    • Split backbone argument to SemanticSegmentation into backbone and head arguments (#412)

    Fixed

    • Fixed a bug where the DefaultDataKeys.METADATA couldn't be a dict (#393)
    • Fixed a bug where the SemanticSegmentation task would not work as expected with finetuning callbacks (#412)
    • Fixed a bug where predict batches could not be visualized with ImageClassificationData (#438)

    Contributors

    @ehofesmann @ethanwharris @fstroth @lillekemiker @tchaton

    Additional credits to @rlizzo @hhsecond @lantiga @luiscape for building the Flash Serve Engine.

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.3.2(Jun 8, 2021)

  • 0.3.1(Jun 8, 2021)

    [0.3.1] - 2021-06-08

    Added

    • Added deeplabv3, lraspp, and unet backbones for the SemanticSegmentation task #370

    Changed

    • Changed the installation command for extra features #346
    • Changed the resize interpolation default mode to nearest #352

    Deprecated

    • Deprecated SemanticSegmentation backbone names torchvision/fcn_resnet50 and torchvision/fcn_resnet101, use fcn_resnet50 and fcn_resnet101 instead #370

    Fixed

    • Fixed flash.Trainer.add_argparse_args not adding any arguments #343
    • Fixed a bug where the translation task wasn't decoding tokens properly #332
    • Fixed a bug where huggingface tokenizers were sometimes being pickled #332
    • Fixed an issue with KorniaParallelTransforms to ensure the random state is shared between transforms #351
    • Fixed a bug where using val_split with overfit_batches would give an infinite recursion #375
    • Fixed a bug where some timm models were mistakenly given a global_pool argument #377
    • Fixed flash.Trainer.from_argparse_args not passing arguments correctly #380

    Contributors

    @akihironitta @aribornstein @carmocca @deepseek-eoghan @edgarriba @ethanwharris

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.3.0(May 20, 2021)

    [0.3.0] - 2021-05-20

    Added

    • Added DataPipeline API (#188 #141 #207)
    • Added timm integration (#196)
    • Added BaseViz Callback (#201)
    • Added backbone API (#204)
    • Added support for Iterable auto dataset (#227)
    • Added multi label support (#230)
    • Added support for schedulers (#232)
    • Added visualisation callback for image classification (#228)
    • Added Video Classification task (#216)
    • Added Dino backbone for image classification (#259)
    • Added Data Sources API (#256 #264 #272)
    • Refactor preprocess_cls to preprocess, add Serializer, add DataPipelineState (#229)
    • Added Semantic Segmentation task (#239 #287 #290)
    • Added Object detection prediction example (#283)
    • Added Style Transfer task and accompanying finetuning and prediction examples (#262)
    • Added a Template task and tutorials showing how to contribute a task to flash (#306)

    Changed

    • Rename valid_ to val_ (#197)
    • Refactor preprocess_cls to preprocess, add Serializer, add DataPipelineState (#229)

    Fixed

    • Fix DataPipeline resolution in Task (#212)
    • Fixed a bug where the backbone used in summarization was not correctly passed to the postprocess (#296)

    Contributors

    @aniketmaurya @carmocca @edgarriba @ethanwharris @pmeier @tchaton

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.2.3(Apr 17, 2021)

  • 0.2.2(Apr 5, 2021)

    [0.2.2] - 2021-04-05

    Changed

    • Switch to use torchmetrics (#169)
    • Update lightning version to v1.2 (#133)

    Fixed

    • Fixed classification softmax (#169)
    • Don't download data if exists (#157)
  • 0.2.1(Mar 6, 2021)

    [0.2.1] - 2021-3-06

    Added

    • Added RetinaNet & backbones to ObjectDetector Task (#121)
    • Added .csv image loading utils (#116, #117, #118)

    Changed

    • Set inputs as optional (#109)

    Fixed

    • Set minimal requirements (#62)
    • Fixed VGG backbone num_features (#154)
  • 0.2.0(Feb 12, 2021)

    [0.2.0] - 2021-02-12

    Added

    • Added ObjectDetector Task (#56)
    • Added TabNet for tabular classification (#101)
    • Added support for more backbones (mobilenet, vgg, densenet, resnext) (#45)
    • Added backbones for image embedding model (#63)
    • Added SWAV and SimCLR models to ImageClassifier + backbone reorg (#68)

    Changed

    • Applied transform in FilePathDataset (#97)
    • Moved classification integration from vision root to folder (#86)

    Fixed

    • Unfreeze default number of workers in datamodule (#57)
    • Fixed wrong label in FilePathDataset (#94)

    Removed

    • Removed densenet161 duplicate in DENSENET_MODELS (#76)
    • Removed redundant num_features arg from Classification model (#88)
  • 0.1.0(Feb 2, 2021)

    Flash Lightning First Release

    Overview:

    Lightning Flash is a collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning. This release introduces 6 Flash Tasks:

    • ImageClassifier
    • TabularClassifier
    • TextClassifier
    • ImageEmbedder
    • SummarizationTask
    • TranslationTask

    Each task can easily be used for both inference and finetuning.

    [0.1.0] - 02/02/2021

    Added

    • Added flash_notebook examples (#9)
    • Added strategy to trainer.finetune with NoFreeze, Freeze, FreezeUnfreeze, UnfreezeMilestones Callbacks (#39)
    • Added SummarizationData, SummarizationTask and TranslationData, TranslationTask (#37)
    • Added ImageEmbedder (#36)

    Contributors @Borda, @carmocca, @justusschock, @SeanNaren, @SkafteNicki, @tchaton, @williamFalcon

    If we forgot someone due to not matching commit email with GitHub account, let us know :]
