A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch.

Overview

WebDataset

WebDataset is a PyTorch Dataset (IterableDataset) implementation that provides efficient access to datasets stored in POSIX tar archives, using only sequential/streaming data access. This brings a substantial performance advantage in many compute environments, and it is essential for very large scale training.

While WebDataset scales to very large problems, it also works well with smaller datasets and simplifies creation, management, and distribution of training data for deep learning.

WebDataset implements the standard PyTorch IterableDataset interface and works with the PyTorch DataLoader. Access to datasets is as simple as:

import torch
import webdataset as wds

dataset = wds.WebDataset(url).shuffle(1000).decode("torchrgb").to_tuple("jpg;png", "json")
dataloader = torch.utils.data.DataLoader(dataset, num_workers=4, batch_size=16)

for inputs, outputs in dataloader:
    ...

In this code snippet, url can refer to a local file, a file on a local HTTP server, an object in cloud or object storage, or even the output of an arbitrary command pipeline.
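
For example, all of the following are valid ways to point a WebDataset at the same (hypothetical) set of shards; the bucket and path names below are placeholders:

import webdataset as wds

# local files (brace notation selects a range of shards)
local_ds = wds.WebDataset("/data/shards/imagenet-train-{000000..000146}.tar")

# shards served over HTTP by a web server or cloud storage endpoint
http_ds = wds.WebDataset("http://server.example.com/shards/imagenet-train-{000000..000146}.tar")

# the output of an arbitrary command pipeline (here: streaming from GCS with gsutil)
piped_ds = wds.WebDataset("pipe:gsutil cat gs://example-bucket/shards/imagenet-train-{000000..000146}.tar")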

WebDataset fulfills a similar function to TensorFlow's TFRecord/tf.Example classes, but it is much easier to adopt because it does not require any data conversion: data is stored in exactly the same format inside tar files as it is on disk, and all preprocessing and data augmentation code remains unchanged.
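
As a sketch of what that means in practice, a shard is just a tar file whose members are the original files, grouped by a shared key; the paths and field names below are made up for illustration:

import webdataset as wds

# write one sample into a shard; the .jpg bytes are stored unchanged
with wds.TarWriter("shard-000000.tar") as sink:
    with open("images/0001.jpg", "rb") as stream:
        jpg_bytes = stream.read()
    sink.write({
        "__key__": "0001",       # becomes the basename of the files in the tar
        "jpg": jpg_bytes,        # raw bytes go in byte-for-byte as 0001.jpg
        "json": {"label": 3},    # dicts are serialized to JSON as 0001.json
    })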

Documentation

Installation

$ pip install webdataset

For the GitHub version:

$ pip install git+https://github.com/tmbdev/webdataset.git

Documentation

Introductory Videos

Several introductory videos about WebDataset and large scale deep learning are available online.

Related Libraries and Software

The AIStore server provides an efficient backend for WebDataset; it functions like a combination of web server, content distribution network, P2P network, and distributed file system. Together, AIStore and WebDataset can serve input data from rotational drives distributed across many servers at the speed of local SSDs to many GPUs, at a fraction of the cost. We can easily achieve hundreds of MBytes/s of I/O per GPU even in large, distributed training jobs.

The tarproc utilities provide command line manipulation and processing of webdatasets and other tar files, including splitting, concatenation, and xargs-like functionality.

The tensorcom library provides fast three-tiered I/O; it can be inserted between AIStore and WebDataset to permit distributed data augmentation and I/O. It is particularly useful when data augmentation requires more CPU than the GPU server has available.

You can find the full PyTorch ImageNet sample code converted to WebDataset at tmbdev/pytorch-imagenet-wds.

Comments
  • Unable to reach maximum download speed

    I have an image dataset of about 250 1GiB tars on GCS, which I'm loading by using urls = f'pipe:gsutil cat {urls}'.

    I tested the maximum download speed with gsutil -m cp ... and got about 750MiB/s with 96 processes (on a 96 vCPU GCP VM), which would amount to about 15000 images/s. I'm testing the data I/O only, no accelerators.

    However, using wds.MultiDataset (96 workers) with basic decode, to_tuple, etc. steps (and minimal cropping with albumentations) and batching to 256 results in much slower download & processing, about 2500 images/s (this is actually already really fast, but I'd like to try to get it even faster :D ).

    So I'm not bottlenecked by download speed, and CPU utilization for the 96 processes shows <30%, so I'm not bottlenecked by the CPU either...

    I profiled the code with pyinstrument and the bottleneck seems to be somewhere in the batching (see the pyinstrument screenshot attached to the original issue).

    I'm working on non-public data, but basically the same thing happens with the openimages example from the docs.

    So given that gsutil -m results in fast download speeds but webdataset doesn't, and both use Python (?) multiprocessing, maybe gsutil is doing something more efficiently here? I don't really know the multiprocessing etc. libraries well at all, so I'm at a bit of a loss here...

    opened by harpone 11
  • syntax error in 0.2.4 release

    I'm getting a syntax error at .../webdataset/cache.py, line 43: while data := stream.read(chunk_size): It looks like := is only valid syntax from Python 3.8 on. Is webdataset limited to Python >= 3.8?
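
    For context, a rough equivalent of that loop without the walrus operator (a sketch, not the library's actual code) would be:

    while True:
        data = stream.read(chunk_size)
        if not data:
            break
        # ... process the chunk as before ...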

    enhancement 
    opened by czmrand 9
  • npy support

    In many cases, reading tensor data using numpy.load is many times faster than loading a pickle file. Would it be possible to add it to the basic_handlers?
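
    In the meantime, a custom decoder callable can be passed to .decode(); this is a sketch under the assumption that the samples carry a .npy field and the shard names are placeholders:

    import io

    import numpy as np
    import webdataset as wds

    def npy_decoder(key, value):
        # handle only fields whose extension is "npy"; returning None
        # lets the remaining default handlers try the field instead
        if key.split(".")[-1] != "npy":
            return None
        return np.load(io.BytesIO(value), allow_pickle=False)

    dataset = wds.WebDataset("shards/data-{000000..000009}.tar").decode(npy_decoder)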

    enhancement 
    opened by shayanfazeli 7
  • DDP training problem

    Hi, how can I make sure that the batches generated by WebLoader contain different data on each node when using multi-node training? Is there any up-to-date example code using webdataset for DDP training?
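
    For reference, one commonly suggested pattern (a sketch, assuming the 0.2.x API and a brace-expanded list of shard URLs in urls) is to let each node see a different subset of shards via a nodesplitter:

    import webdataset as wds

    dataset = (
        wds.WebDataset(urls, nodesplitter=wds.split_by_node)
        .shuffle(1000)
        .decode("torchrgb")
        .to_tuple("jpg;png", "json")
        .batched(64)
    )
    loader = wds.WebLoader(dataset, num_workers=4, batch_size=None)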

    opened by marson666 6
  • Adding option to shuffle shards before splitting to workers, based on current epoch

    What

    This PR adds an option to shuffle the shards before splitting them between nodes and workers. We coordinate this shuffling via a shared random seed to ensure shards are properly distributed between workers. With epoch-based training loops, the idea is to set this random seed to a different number every epoch, ensuring shards are distributed differently at each iteration. This is completely optional and should not affect existing workflows in any way. If users choose to use it, the only change they need to make is to call dataloader.set_epoch(epoch) at each epoch before iterating.

    Why

    Without this, each worker always sees the same subset of shards at every epoch when using the standard nodesplitter and split_by_worker. We found this to have a drastic negative impact on performance when training contrastive models, since it limits the possibilities of which datapoints are being contrasted. With this fix, the issue was substantially mitigated.

    How

    The fix consists of two small modifications. The first is to introduce a set_epoch method, which informs the dataloader (and everything that composes it, up to the ShardList) of the current epoch. The second is to use that value as a random seed for shuffling shards before they are split between workers and nodes.
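
    In pseudocode, the shuffling step amounts to something like the following (a simplified sketch with made-up names, not the exact diff in this PR):

    import random

    def shuffle_shards_for_epoch(urls, epoch, base_seed=0):
        # every worker/node uses the same (base_seed + epoch) seed, so they all
        # agree on the shuffled order before the usual splitting is applied
        urls = list(urls)
        random.Random(base_seed + epoch).shuffle(urls)
        return urls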

    We have found this to significantly improve performance in our contrastive learning experiments, and would greatly appreciate it if you could take a look at this!

    cc @mitchnw

    documentation 
    opened by gabrielilharco 6
  • aws storage

    Google Cloud Storage buckets work well (for the most part) with this framework. Is there any suggested/preferred methodology for using Amazon S3 storage with it? (Putting tar files in an S3 bucket and using the URL to load them on a local machine.)
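
    One approach that is often used (an assumption on my part, not an official recommendation) is to stream the shards through the AWS CLI with a pipe: URL, analogous to how GCS shards can be streamed with gsutil cat:

    import webdataset as wds

    # requires the aws CLI to be installed and configured with credentials;
    # bucket and shard names are placeholders
    url = "pipe:aws s3 cp s3://example-bucket/shards/train-{000000..000099}.tar -"
    dataset = wds.WebDataset(url)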

    opened by shayanfazeli 6
  • Add ability to re-create bit-exact Webdatasets

    Currently it is impossible to re-create bit-exact Webdatasets, as each file in the Tar archive has a different mtime. This has slightly annoying implications for file caching and versioning, as you have to deal with changed-but-not-really-changed files all the time.

    My first question is: Is this on purpose? Does mtime encode any valuable information for the user?

    If no, why not set it to a fixed value, e.g. 1970-01-01?

    If yes, why not make it overridable by the user, e.g. by a mtime kwarg?

    Currently, this PR does the latter: It introduces a parameter that allows me to override this value.

    To reproduce

    import webdataset as wds
    
    for f in ["test1.tar", "test2.tar"]:
        with wds.TarWriter(f) as sink:
            sink.write(
                {
                    "__key__": "sample0000",
                    "input.txt": "1",
                    "output.txt": "2",
                },
            )
    
    $ md5sum *.tar
    0f90a5b6ca6f0e423685e23cad872d36  test1.tar                                                                                                                                                                                                                                     
    49e91eaa9c54125897b2286af2757c0e  test2.tar
    $ tar tvf test1.tar 
    -r--r--r-- bigdata/bigdata   1 2021-11-28 15:16 sample0000.input.txt
    -r--r--r-- bigdata/bigdata   1 2021-11-28 15:16 sample0000.output.txt
    $ tar tvf test2.tar 
    -r--r--r-- bigdata/bigdata   1 2021-11-28 15:16 sample0000.input.txt     # the dates are off by microseconds
    -r--r--r-- bigdata/bigdata   1 2021-11-28 15:16 sample0000.output.txt    # the dates are off by microseconds
    

    Behaviour after this PR

    With this PR it is possible to override the mtime parameter of TarWriter.write(...), and to set it to arbitrary fixed values:

    import webdataset as wds
    
    for f in ["test1.tar", "test2.tar"]:
        with wds.TarWriter(f) as sink:
            sink.write(
                {
                    "__key__": "sample0000",
                    "input.txt": "1",
                    "output.txt": "2",
                },
                mtime=0.0
            )
    

    which creates bit-exact copies of the output file:

    $ md5sum *.tar
    b1de8150b28126256f05943741b2b5ab  test1.tar
    b1de8150b28126256f05943741b2b5ab  test2.tar
    $ tar tvf test1.tar 
    -r--r--r-- bigdata/bigdata   1 1970-01-01 01:00 sample0000.input.txt
    -r--r--r-- bigdata/bigdata   1 1970-01-01 01:00 sample0000.output.txt
    $ tar tvf test2.tar 
    -r--r--r-- bigdata/bigdata   1 1970-01-01 01:00 sample0000.input.txt
    -r--r--r-- bigdata/bigdata   1 1970-01-01 01:00 sample0000.output.txt
    
    opened by nils-werner 5
  • Remove torch from dependencies

    WebDataset 0.1.76 which I was using before doesn't have torch as a dependency. However, newer versions do.

    I don't think this is a good idea, since TarWriter (and maybe other webdataset parts) can be used for creating datasets in a purely data-processing environment. Bringing in a huge dependency (torch 1.10.0 is 1.8 GB!) is a very big overhead.

    What do you guys think?

    opened by danielgafni 5
  • Large size of tar/shard writer output

    I have a text dataset with 20 GB of data and I tried to use webdataset's ShardWriter/TarWriter to convert it. Unfortunately, the initial 20 GB of data becomes astonishingly large after the conversion; here are the methods I tried and the resulting output sizes on disk.

    | Method           | Output Size (GB) | Data stored                        |
    |------------------|------------------|------------------------------------|
    | ShardWriter pth  | 800              | int32 tensors of variable size     |
    | ShardWriter pdy  | 300              | Raw text data                      |
    | ShardWriter ten  | 260              | int32 numpy array of variable size |
    | ShardWriter npy  | 300              | int32 numpy array of variable size |
    | ShardWriter json | 300              | Raw text data                      |
    | tarp cli         | 210              | Raw text data                      |

    Is this the expected behavior? Why is raw data stored in tar files so much larger than the original data?

    opened by huberemanuel 5
  • True random access (i.e. Dataset vs. IterableDataset)

    Hi, this is a really neat library! However it would be nice to have a simpler interface to TAR files that allows random access, instead of enforcing sequential access. Let me explain why.

    The most common use case for academic labs such as mine is not terabyte-scale remote datasets where sharding is common, but several-gigabyte datasets with a few million small images.

    So a dataset fits on a single machine; but due to automatic scheduling in a cluster we cannot have all datasets on all machines. In this context, the easiest solution is to copy the dataset to the local machine at the start of training. However, millions of small files really strain the filesystem (both for copying and for accessing during training). So copying and reading from a single TAR file would be ideal -- and WebDataset seems (on the surface) to do this well.

    But the constraints of the IterableDataset are maybe a step too far. We now have to make decisions about how many shards to use, sizes of rolling buffers for shuffling, etc. This adds a ton of friction, and the uneasy feeling that now the samples are not sufficiently IID for training if you make a wrong decision. Compare this to the ease of training with tons of small images in a filesystem.

    I was trying to use WebDataset and get colleagues to adopt it, but this is a big wall for adoption. Uncompressed TAR files allow random access. Could this not be a simpler entry point for WebDataset? A user who wants to scale things up would then find it easier to adapt to the IterableDataset, but I think that many users would be perfectly happy with the random access version, which is much less restrictive.
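
    For what it's worth, plain Python can already do random access into an uncompressed tar, at the cost of scanning it once to build an index; a rough sketch (not a WebDataset API, file names assumed):

    import tarfile

    # one pass over the archive builds an index of members; after that,
    # extractfile() can seek directly to any sample
    with tarfile.open("dataset.tar") as archive:
        index = {m.name: m for m in archive.getmembers() if m.isfile()}
        data = archive.extractfile(index["sample0000.input.txt"]).read()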

    enhancement 
    opened by jotaf98 5
  • Serious drop in data loading speed observed

    Has anyone else noticed download issues with webdataset, about a 10x drop in data loading speed? I'm observing slowdowns with everything (on GCS at least), for example also with @tmbdev's openimages dataset...

    Please see the gist here. Should be ready to run.

    In my earlier benchmarks I was able to get about 2000 img/s with 8 processes with the above script (50 minibatches of size 256 in about 6.5 s), but now I'm getting about 400 img/s tops.

    I'm on a GCP VM. Tried downgrading various libraries, spun up a VM from an image from last July etc... gsutil multiprocessing downloads from GCS are still very fast (~570MiB/s => 11400 openimages img/s)

    I'm totally confused what's going on!

    opened by harpone 5
  • Periods in base filename interpreted as extensions

    It appears that periods in the base part of the filename cause issues. For example the filename ./235342 Track 2.0 (Clean Version).mp3 leads to an unexpected key of 0 (clean version).mp3. This caused sporadic downstream issues like:

    ValueError: didn't find ['mp3'] in ['__key__', '__url__', '0 (clean version).json', '0 (clean version).mp3']

    Looking into the code, it appears that this is by design in order to support multiple extensions like ".seg.jpg", but it would be nice to mention this in the documentation to avoid surprises.
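
    A minimal illustration of the splitting behavior described above (simplified; not the library's actual code):

    fname = "235342 Track 2.0 (Clean Version).mp3"
    # the base name is split at the first "." so that multi-part extensions
    # such as ".seg.jpg" stay together -- which also swallows any period
    # that appears in the base name itself
    key, _, ext = fname.partition(".")
    print(key)  # '235342 Track 2'
    print(ext)  # '0 (Clean Version).mp3'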

    opened by iceboundflame 0
  • gopen_curl fails on windows

    Loading a tar file from a URL as a webdataset fails because of a wrongly formatted curl command on Windows.

    The command is built here: https://github.com/webdataset/webdataset/blob/main/webdataset/gopen.py#L195 Single quotes around the URL lead to improper interpretation of URLs that contain the & symbol on a Windows machine.

    enhancement 
    opened by pratikgujjar 1
  • Checkpoint support

    Hello,

    I am wondering if there is any support for checkpointing in WebDataset. Our use case is like this: we have a series of samples [s_1, s_2, s_3, ..., s_n]. If training crashes, could webdataset support the following (a rough sketch of the idea follows the list)?

    • When the training is checkpointed, the index that the webdataset sample loader has reached is also checkpointed.
    • When resuming from a checkpoint, WebDataset can start loading samples from the checkpointed index.
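
    A rough sketch of how that could be approximated today (hypothetical, not a built-in WebDataset feature; shard names are placeholders, and it simply skips already-seen samples on resume):

    import itertools

    import webdataset as wds

    dataset = wds.WebDataset("shards/train-{000000..000099}.tar")
    resume_index = 12345  # restored from the training checkpoint
    for sample in itertools.islice(iter(dataset), resume_index, None):
        ...  # training continues from where the checkpoint left off
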
    opened by yncxcw 1
  • Updated syntax for multinode usage

    Hi, I'm working on Google Colab and trying to set up a minimal example of multi-core pytorch training with webdataset, using data in GCP buckets. Specifically, I've got a bucket with my training shards and another bucket with test shards.

    For colab setup, I'm using the suggested wheels from the pytorch XLA notebook for multi-core and adding webdataset to the pip installs. I see I've got webdataset 0.2.31 from pip show webdataset in the console.

    I tried to start from (https://webdataset.github.io/webdataset/multinode/) with this code in the map function:

    urls = list(braceexpand.braceexpand(FLAGS['train_urls']))
    dataset = (wds.WebDataset(urls)
                 .to_tuple('x_data.npy', 'y_data.npy')
                 .map(preprocess)
                 .batched(50))
    

    but I'm seeing some strange behavior that I don't understand. Specifically, the docs indicate that a default nodesplitter and split_by_worker should be in effect:

    urls = list(braceexpand.braceexpand("dataset-{000000..000999}.tar"))
    dataset = wds.ShardList(urls, splitter=wds.split_by_worker, nodesplitter=wds.split_by_node, shuffle=False)
    dataset = wds.Processor(dataset, wds.url_opener)
    dataset = wds.Processor(dataset, wds.tar_file_expander)
    dataset = wds.Processor(dataset, wds.group_by_keys)
    

    However, with the following code in my map function, I see that every core processes every file. (The next step would be to investigate worker splitting once we've got shards distributed across cores.)

    loader = DataLoader(dataset, num_workers=FLAGS['num_workers']) 
    for batch, (x,y) in enumerate(loader):
      print(f'core:{rank} batch:{batch}: x_min: {torch.min(x)} x_max: {torch.max(x)} len:{x.size()}')
    

    I saw a related post with a suggestion to use the v2 branch (https://github.com/webdataset/webdataset/issues/138), but (if possible) I'd like my dependency to be the main release that comes in as webdataset from pip.

    I also attempted to expand on the wds.Processor approach, but that doesn't seem to be available in this version, maybe? At a high level I'm just looking for the best advice on implementing a pretty vanilla version of this with shard shuffling and sample shuffling. I'm not sure how shard shuffling across cores is coordinated (do you use a parallel loader?), so thoughts on that would be great too. If a minimal working example notebook would improve/clarify this question, that's also something I'm happy to provide.

    opened by matt-gss 0
  • when converting dataset to wds, data is getting larger.

    this is my code:

    import webdataset as wds
    from tqdm import tqdm
    from uuid import uuid1

    # pattern, proc_id, old_ds, and idx_list come from the surrounding script
    with wds.ShardWriter(pattern.format(proc_id), maxsize=int(1e10), maxcount=int(1000000)) as sink:
    
        for i in tqdm(idx_list):
            text, image = old_ds[i]
            key = uuid1().__str__()
            sink.write({
                "__key__": key,  # uuid
                "input.jpg": image,  # PIL image
                "input.txt": text,  # a string
            })
    

    My dataset is composed of 880k image-text pairs, and it was 202 GB in LMDB format (about the same as the small single image files on disk, with images stored as base64). When converting to wds, it became 474 GB.

    I saw the earlier issues about this, but the dataset still gets larger even when I try tar.gz.

    opened by yli1994 4
  • AttributeError: module 'webdataset' has no attribute 'ShardList'

    From the documentation: https://webdataset.github.io/webdataset/multinode/#splitting-shards-across-nodes-and-workers

    urls = list(braceexpand.braceexpand("dataset-{000000..000999}.tar"))
    dataset = wds.ShardList(urls, splitter=wds.split_by_worker, nodesplitter=wds.split_by_node, shuffle=False)
    dataset = wds.Processor(dataset, wds.url_opener)
    dataset = wds.Processor(dataset, wds.tar_file_expander)
    dataset = wds.Processor(dataset, wds.group_by_keys)
    

    Running this results in

    AttributeError: module 'webdataset' has no attribute 'ShardList'
    
    documentation 
    opened by drscotthawley 2
Releases(0.2.31)
Owner: High Performance I/O for Large Scale Deep Learning