Simple tools for logging and visualizing, loading and training

Overview

TNT

TNT is a library providing powerful dataloading, logging and visualization utilities for Python. It is closely integrated with PyTorch and is designed to enable rapid iteration with any model or training regimen.

Installation

TNT can be installed with pip. To do so, run:

pip install torchnet

If you run into issues, make sure that PyTorch is installed first.

You can also install the latest version from master. Just run:

pip install git+https://github.com/pytorch/tnt.git@master

To update to the latest version from master:

pip install --upgrade git+https://github.com/pytorch/tnt.git@master

About

TNT (imported as torchnet) is a framework for PyTorch that provides a set of abstractions aimed at encouraging code re-use and modular programming. It provides powerful data loading, logging, and visualization utilities.

The project was inspired by TorchNet, and legend says that the name stood for “TorchNetTwo”. Since the deprecation of TorchNet, TNT has developed on its own.

For example, TNT provides simple methods to record model performance in the torchnet.meter module and to log them to Visdom (or, in the future, tensorboardX) with the torchnet.logger module.
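
A minimal sketch of how the two fit together, assuming a Visdom server is running locally on the default port:

    import torchnet as tnt
    from torchnet.logger import VisdomPlotLogger

    # Track the running average of the training loss with a meter.
    loss_meter = tnt.meter.AverageValueMeter()
    loss_meter.add(0.42)
    loss_meter.add(0.37)
    mean, std = loss_meter.value()

    # Send the mean loss for epoch 1 to the local Visdom instance
    # (the server/port and plot title are assumptions for a default setup).
    plot = VisdomPlotLogger('line', port=8097, opts={'title': 'Train Loss'})
    plot.log(1, mean)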

In the future, TNT will also provide strong support for multi-task learning and transfer learning applications. It currently supports joint training data loading through torchnet.utils.MultiTaskDataLoader.

Most of the modules accept NumPy arrays as well as PyTorch tensors as input, and so could potentially be used with other frameworks.
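
For instance, ClassErrorMeter can be fed NumPy arrays directly; a small sketch, assuming scores of shape (N, K) and integer targets of shape (N,):

    import numpy as np
    import torchnet as tnt

    # Top-1 classification error, computed from NumPy inputs instead of tensors.
    meter = tnt.meter.ClassErrorMeter(topk=[1])
    scores = np.array([[0.9, 0.1],
                       [0.2, 0.8],
                       [0.6, 0.4]])
    targets = np.array([0, 1, 1])
    meter.add(scores, targets)
    print(meter.value())  # top-1 error in percent; one of the three predictions is wrong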

Getting Started

See some of the examples in https://github.com/pytorch/examples. We would like to include some walkthroughs in the docs (contributions welcome!).

[LEGACY] Differences from the Lua version

What's been ported so far:

  • Datasets:
    • BatchDataset
    • ListDataset
    • ResampleDataset
    • ShuffleDataset
    • TensorDataset [new]
    • TransformDataset
  • Meters:
    • APMeter
    • mAPMeter
    • AverageValueMeter
    • AUCMeter
    • ClassErrorMeter
    • ConfusionMeter
    • MovingAverageValueMeter
    • MSEMeter
    • TimeMeter
  • Engines:
    • Engine
  • Loggers:
    • Logger
    • VisdomLogger
    • MeterLogger [new, easy to plot multi-meter via Visdom]

Any dataset can now be plugged into torch.utils.data.DataLoader, or you can call .parallel(num_workers=8) on it to use multiprocessing.
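
A minimal sketch of both options (the load function and batch sizes below are placeholders):

    import torchnet as tnt
    from torch.utils.data import DataLoader

    # Wrap a list of indices; each element is loaded on demand.
    dataset = tnt.dataset.ListDataset(
        list(range(10)),
        load=lambda idx: {'input': float(idx), 'target': idx % 2},
    )

    # Option 1: hand the dataset to a standard PyTorch DataLoader.
    loader = DataLoader(dataset, batch_size=4)

    # Option 2: let TNT build the DataLoader with worker processes.
    parallel_loader = dataset.parallel(batch_size=4, num_workers=8)

    for batch in parallel_loader:
        print(batch['input'], batch['target'])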

Comments
  • Cleanup PT-D imports

    Summary: The flow logic around the torch.dist imports results in a large number of Pyre errors; it would be preferable to just raise on import, as opposed to silently failing to import the bindings.

    Cons: Some percentage of users (macOS?) may have notebooks that import pytorch.distributed, though this is probably small, since any attempt to call parts of the library would just fail anyway...

    fb: TODO: assuming this is OK, will remove the tens to hundreds of unused Pyre ignores that are no longer required.

    Differential Revision: D39842273

    cla signed fb-exported Reverted 
    opened by dstaay-fb 10
  • Add more datasets

    Added:

    • SplitDataset
    • ConcatDataset

    Made load an optional parameter in ListDataset so that it can be used as a TableDataset.

    Added a seed argument to resample in ShuffleDataset.
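
    A rough usage sketch of the additions (assuming the partition API mirrors the Lua version; the list and fractions below are placeholders):

        import torchnet as tnt

        # Wrap a plain Python list; with an identity load, ListDataset acts
        # like a simple table dataset.
        base = tnt.dataset.ListDataset(list(range(100)), load=lambda x: x)

        # SplitDataset: named partitions given as fractions; select one to iterate.
        split = tnt.dataset.SplitDataset(base, partitions={'train': 0.9, 'val': 0.1})
        split.select('train')
        print(len(split))   # roughly 90 elements

        # ConcatDataset: chain several datasets end to end.
        both = tnt.dataset.ConcatDataset([base, base])
        print(len(both))    # 200 elements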

    I don't understand why elem_list has to be a torch.LongTensor when it is a tensor in ListDataset (see here). It seems pointless.

    Sasank.

    opened by chsasank 8
  • Update the train() API parameter to take a state

    Summary: Allow the state to be set up outside of the train() function, which provides better flexibility. However, the state becomes an opaque object that might be modified accidentally.

    Suggestion: we should move the dataloader out of the state and make the state immutable.

    Reviewed By: ananthsub

    Differential Revision: D40233501

    cla signed fb-exported 
    opened by zzzwen 7
  • Fix tests for CI

    Summary: Seeing errors like this in CI on trunk:

        from torchtnt.tests.runner.utils import DummyPredictUnit, generate_random_dataloader
    E   ModuleNotFoundError: No module named 'torchtnt.tests'
    

    Moving the test utils to resolve it.

    Differential Revision: D38774677

    cla signed fb-exported 
    opened by ananthsub 6
  • MeterLogger initialization fails when the server address includes an `http://` prefix.

    >>> a = MeterLogger(server="http://localhost", port=8097, env='hello')
    >>> b = torch.Tensor([1])
    >>> a.updateLoss(b, 'loss')
    >>> a.resetMeter(mode='Train', iepoch=1)
    

    Error message:

    Exception in user code:
    ------------------------------------------------------------
    Traceback (most recent call last):
      File "/home/txk/.local/lib/python3.6/site-packages/urllib3/connection.py", line 141, in _new_conn
        (self.host, self.port), self.timeout, **extra_kw)
      File "/home/txk/.local/lib/python3.6/site-packages/urllib3/util/connection.py", line 60, in create_connection
        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
      File "/usr/lib/python3.6/socket.py", line 745, in getaddrinfo
        for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
    socket.gaierror: [Errno -2] Name or service not known
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/txk/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 601, in urlopen
        chunked=chunked)
      File "/home/txk/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 357, in _make_request
        conn.request(method, url, **httplib_request_kw)
      File "/usr/lib/python3.6/http/client.py", line 1239, in request
        self._send_request(method, url, body, headers, encode_chunked)
      File "/usr/lib/python3.6/http/client.py", line 1285, in _send_request
        self.endheaders(body, encode_chunked=encode_chunked)
      File "/usr/lib/python3.6/http/client.py", line 1234, in endheaders
        self._send_output(message_body, encode_chunked=encode_chunked)
      File "/usr/lib/python3.6/http/client.py", line 1026, in _send_output
        self.send(msg)
      File "/usr/lib/python3.6/http/client.py", line 964, in send
        self.connect()
      File "/home/txk/.local/lib/python3.6/site-packages/urllib3/connection.py", line 166, in connect
        conn = self._new_conn()
      File "/home/txk/.local/lib/python3.6/site-packages/urllib3/connection.py", line 150, in _new_conn
        self, "Failed to establish a new connection: %s" % e)
    urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f6cc08ce198>: Failed to establish a new connection: [Errno -2] Name or service not known
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/txk/.local/lib/python3.6/site-packages/requests/adapters.py", line 440, in send
        timeout=timeout
      File "/home/txk/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 639, in urlopen
        _stacktrace=sys.exc_info()[2])
      File "/home/txk/.local/lib/python3.6/site-packages/urllib3/util/retry.py", line 388, in increment
        raise MaxRetryError(_pool, url, error or ResponseError(cause))
    urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //localhost:8097/events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6cc08ce198>: Failed to establish a new connection: [Errno -2] Name or service not known',))
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/txk/.local/lib/python3.6/site-packages/visdom/__init__.py", line 261, in _send
        data=json.dumps(msg),
      File "/home/txk/.local/lib/python3.6/site-packages/requests/api.py", line 112, in post
        return request('post', url, data=data, json=json, **kwargs)
      File "/home/txk/.local/lib/python3.6/site-packages/requests/api.py", line 58, in request
        return session.request(method=method, url=url, **kwargs)
      File "/home/txk/.local/lib/python3.6/site-packages/requests/sessions.py", line 508, in request
        resp = self.send(prep, **send_kwargs)
      File "/home/txk/.local/lib/python3.6/site-packages/requests/sessions.py", line 618, in send
        r = adapter.send(request, **kwargs)
      File "/home/txk/.local/lib/python3.6/site-packages/requests/adapters.py", line 508, in send
        raise ConnectionError(e, request=request)
    requests.exceptions.ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //localhost:8097/events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6cc08ce198>: Failed to establish a new connection: [Errno -2] Name or service not known',))
    
    
    opened by TuXiaokang 6
  • A code formatting error exists in `meterlogger.py`

    The error message looks like this:

    >>> import torchnet
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/site-packages/torchnet/__init__.py", line 5, in <module>
        from . import logger
      File "/usr/local/lib/python3.6/site-packages/torchnet/logger/__init__.py", line 2, in <module>
        from .meterlogger import MeterLogger
      File "/usr/local/lib/python3.6/site-packages/torchnet/logger/meterlogger.py", line 16
        self.nclass = nclass
                           ^
    TabError: inconsistent use of tabs and spaces in indentation
    
    opened by TuXiaokang 6
  • Add Visdom logging to TNT

    Added the ability to log to Visdom (like Tensorboard, but better!). It is a port of this repo to TNT.

    [Screenshot: MNIST example plots logged to Visdom]

    This PR adds TNT support for the existing Visdom plots and includes an example that uses it for MNIST (above).

    opened by alexsax 6
  • How can I install torchnet with conda?

    On my Ubuntu server, pip works, but torchnet only appears in 'pip list' and not in 'conda list', so when I run xx.py the error is "no module named 'torchnet'". Can I use conda to install torchnet, and if so, how? Could anyone help me? Thanks!

    opened by qilong-zhang 5
  • Replace loss[0] with loss.item() due to deprecation

    When using TNT's MeterLogger, a PyTorch deprecation warning appears:

    UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
    

    Issued by this line.

    Replacing loss[0] with loss.item() should fix that without any further consequences.
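
    A minimal illustration of the change (a generic example, not the exact line in meterlogger.py):

        import torch

        loss = torch.tensor(0.5)    # a 0-dim tensor, as returned by a loss function

        value = loss.item()         # preferred: convert the 0-dim tensor to a Python float
        # value = loss[0]           # deprecated: indexing a 0-dim tensor triggers the warning above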

    opened by sauercrowd 5
  • Generic tnt.Engine?

    Shouldn't the current tnt.engine.Engine be renamed to tnt.engine.SGDEngine, with tnt.engine.Engine becoming a generic engine?

    This is similar to the Lua torchnet, which allowed extending engines to build meta-engines like train_val_engine.

    opened by karandwivedi42 5
  • Update LRScheduler instance checks

    Summary: After https://github.com/pytorch/pytorch/pull/88503, these isinstance checks no longer pass. Example: https://github.com/pytorch/tnt/actions/runs/3434539858/jobs/5725945075

    Differential Revision: D41177335

    cla signed fb-exported 
    opened by ananthsub 4
  • Create torchtnt.utils.lr_scheduler.TLRScheduler

    Summary:

    Introduce torchtnt.utils.lr_scheduler.TLRScheduler to manage compatibility with torch's exposure of LRScheduler (https://github.com/pytorch/pytorch/pull/88503).

    It seems that the issue still persists with 1.13.1 (see #285). Replace the get_version logic with a try/except around the LRScheduler / _LRScheduler import.
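
    A sketch of the try/except approach described above (the assumed shape of the shim, not the exact diff):

        # Fall back to the private base class on torch versions that do not yet
        # expose the public LRScheduler name.
        try:
            from torch.optim.lr_scheduler import LRScheduler as TLRScheduler
        except ImportError:
            from torch.optim.lr_scheduler import _LRScheduler as TLRScheduler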

    Fixes #285 https://github.com/pytorch/tnt/issues/285

    cla signed 
    opened by dmtrs 7
  • Issue with torch 1.13

    🐛 Describe the bug

    Cannot import torchtnt.framework due to an AttributeError.

    Traceback (most recent call last):
      File "/Users/foo/app/runner/trainer/__init__.py", line 4, in <module>
        import torchtnt.framework
      File "/Users/foo/Library/Caches/pypoetry/virtualenvs/sp-mcom8aNU-py3.9/lib/python3.9/site-packages/torchtnt/framework/__init__.py", line 7, in <module>
        from .auto_unit import AutoUnit
      File "/Users/foo/Library/Caches/pypoetry/virtualenvs/sp-mcom8aNU-py3.9/lib/python3.9/site-packages/torchtnt/framework/auto_unit.py", line 23, in <module>
        from torchtnt.framework.unit import EvalUnit, PredictUnit, TrainUnit
      File "/Users/foo/Library/Caches/pypoetry/virtualenvs/sp-mcom8aNU-py3.9/lib/python3.9/site-packages/torchtnt/framework/unit.py", line 24, in <module>
        TLRScheduler = torch.optim.lr_scheduler.LRScheduler
    AttributeError: module 'torch.optim.lr_scheduler' has no attribute 'LRScheduler'
    

    From the torch documentation, the stable version does not seem to expose torch.optim.lr_scheduler.LRScheduler: https://pytorch.org/docs/stable/optim.html. Please also see the BC-breaking change after 1.10 that may affect the optimizer and LR step.

    From requirements.txt it does not seem that torchtnt is pinned to a particular torch version.

    What is the currently supported torch version for torchtnt==0.0.4? Are there plans to support particular torch versions?

    Versions

    % poetry run python collect_env.py
    Collecting environment information...
    PyTorch version: 1.13.1
    Is debug build: False
    CUDA used to build PyTorch: None
    ROCM used to build PyTorch: N/A
    
    OS: macOS 10.14.6 (x86_64)
    GCC version: Could not collect
    Clang version: Could not collect
    CMake version: Could not collect
    Libc version: N/A
    
    Python version: 3.9.13 (main, Sep  3 2022, 22:36:35)  [Clang 10.0.1 (clang-1001.0.46.4)] (64-bit runtime)
    Python platform: macOS-10.14.6-x86_64-i386-64bit
    Is CUDA available: False
    CUDA runtime version: No CUDA
    CUDA_MODULE_LOADING set to: N/A
    GPU models and configuration: No CUDA
    Nvidia driver version: No CUDA
    cuDNN version: No CUDA
    HIP runtime version: N/A
    MIOpen runtime version: N/A
    Is XNNPACK available: True
    
    Versions of relevant libraries:
    [pip3] mypy-extensions==0.4.3
    [pip3] numpy==1.23.5
    [pip3] torch==1.13.1
    [pip3] torchtnt==0.0.4
    [conda] Could not collect
    
    opened by dmtrs 1
  • Support recreating dataloaders during loop

    Summary: Sometimes users need to recreate the dataloaders at the start of an epoch during the overall training loop. To support this, we allow users to register a creation function when creating the state for training or fitting. We pass the state as an argument, since users may want to use progress information, such as the progress counters, to reinitialize the dataloader.

    We don't add this support for evaluate or predict since those functions iterate through the corresponding dataloader just once.

    For fit, this allows the flexibility to reload the training and evaluation dataloaders independently if desired.
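
    A hypothetical sketch of such a creation function (the names and the progress attribute are illustrative, not the actual torchtnt API):

        import torch
        from torch.utils.data import DataLoader, TensorDataset

        def create_train_dataloader(state):
            # Re-seed shuffling based on how many epochs have completed so far
            # (the attribute name here is assumed for illustration).
            epochs_done = getattr(state, "num_epochs_completed", 0)
            generator = torch.Generator().manual_seed(epochs_done)
            dataset = TensorDataset(torch.randn(128, 8), torch.randint(0, 2, (128,)))
            return DataLoader(dataset, batch_size=16, shuffle=True, generator=generator)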

    Differential Revision: D40539580

    cla signed fb-exported 
    opened by ananthsub 3