Overview

PADL


Pipeline abstractions for deep learning.


Full documentation here: https://lf1-io.github.io/padl/

PADL:

  • is a pipeline builder for PyTorch.
  • may be used with all of the great PyTorch functionality you're used to for writing layers.
  • allows users to build pre-processing, forward passes, loss functions and post-processing into the pipeline.
  • supports models with arbitrary topologies that make use of arbitrary packages from the Python ecosystem.
  • allows for converting standard functions to PADL components using a single keyword, transform.

PADL was developed at LF1, an AI innovation lab based in Berlin, Germany.

Getting Started

Installation

pip install padl

PADL currently supports Python 3.7, 3.8 and 3.9.

Python >= 3.8 is preferred: creating and loading transforms (though not executing them) can be slower in 3.7.

Your first PADL program

from padl import transform, batch, unbatch
import torch
from torch import nn
nn = transform(nn)  # wrap the torch.nn module so its layers can be used as pipeline steps

@transform
def prepare(x):
    return torch.tensor(x)

@transform
def post(x):
    return x.topk(1)[1].item()

my_pipeline = prepare >> batch >> nn.Linear(10, 20) >> unbatch >> post
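
Once built, the pipeline can be applied to single items with infer_apply. A minimal usage sketch (the length-10 input is an assumption chosen to match the Linear(10, 20) layer above):

# hypothetical usage: run inference on one example
prediction = my_pipeline.infer_apply([0.0] * 10)
print(prediction)  # index of the largest of the 20 linear outputs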

Resources

Contributing

Code of conduct: https://github.com/lf1-io/padl/blob/main/CODE_OF_CONDUCT.md

If you're interested in contributing to PADL, please look at the current issues: https://github.com/lf1-io/padl/issues

Licensing

PADL is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

Comments
  • fix for split problem

    fix for split problem

    • [x] Failing test for IfInStage when using _pd_get_splits

    • [x] TODO Get this (a / b / c) >> (x + y + z) example to work. There is a test for this under TestRollout

    • [x] TODO Get this (a / b) >> c example to work. There is a test for this under TestParallel

    • [x] Failing test for TestTransformDeviceCheck. Not sure if it's possible to get error WrongDeviceError to raise

    • This solution reopens issue #27 since _pd_get_splits() relies on Transforms to be in preprocess by default. An alternative solution to this issue could be to directly use transform.pd_call_transform() when inspecting individual components of the transform. This would also avoid double batching. @wuhu @eramdiaz what do you think?

    opened by wuhu 20
  • Assignment breaks the saving

    Assignment breaks the saving

    🐞 Bug

    Assignment breaks the saving

    Works BUT

    • pd_to is not written in the transform.py file
    • visit_Attribute is not triggered.
    DEVICE = 'cpu'
    
    
    @transform
    def times_two(x):
        return x * 2
    
    times_two.pd_to(DEVICE)
    
    save(times_two, 'm.padl')
    m = load('m.padl')
    

    content of transform.py

    from padl import transform
    
    
    @transform
    def times_two(x):
        return x * 2
    
    
    _pd_main = times_two
    

    Not working code

    • triggers visit_Attribute 6 times.
    • creation of times_two is not written to the file.
    DEVICE = 'cpu'
    
    
    @transform
    def times_two(x):
        return x * 2
    
    times_two = times_two.pd_to(DEVICE)
    
    save(times_two, 'm.padl')
    m = load('m.padl')
    

    content of transform.py

    DEVICE = 'cpu'
    times_two = times_two.pd_to(DEVICE)
    _pd_main = times_two
    

    Works BUT

    • forgets about the device assignment.
    • forgets about new_name
    • does not trigger visit_Attribute
    import padl
    from padl import transform, save, load
    
    DEVICE = 'cpu'
    
    @transform
    def times_two(x):
        return x * 2
    
    new_name = times_two.pd_to(DEVICE)
    
    save(new_name, 'm.padl', force_overwrite=True)
    m2 = load('m.padl')
    

    This works, but forgets about new_name and the new_name assignment. Below is the code written to transform.py.

    from padl import transform
    
    
    @transform
    def times_two(x):
        return x * 2
    
    
    _pd_main = times_two
    

    bug saver 
    opened by jasonkhadka 14
  • Add compose parallel rollout

    Add compose parallel rollout

    • Added compose, parallel, rollout, namedrollout, namedparallel
    • implemented __call__ for these
    • renamed MetaTransforms to CompoundTransforms
    • resolved inheritance issue - all inherit from CompoundTransforms
    • unresolved - module and stack arguments for __init__
    opened by jasonkhadka 14
  • Saving and loading issues when transforms are attributes in a class

    Saving and loading issues when transforms are attributes in a class

    🐞 Bug

    When a Transform is an attribute of a class and is saved inside it, an OSError: source code not available error is raised when loading it.

    1. Code:
    @transform
    def times_two(x):
        return x * 2
    
    @transform
    class MyClass:
        def __init__(self, a): 
            self.a = a
        def __call__(self, x):  
            return self.a + x
    
    @transform
    class Test:
        def __init__(self):
            self.t2 = MyClass(6)
            self.t2.pd_save('t2.padl', force_overwrite=True)
            times_two.pd_save('t4.padl', force_overwrite=True)
    
    t = Test()
    t4 = load('t4.padl') # This works
    t2 = load('t2.padl')
    
    2. Error: see the screenshot attached to the issue (2021-11-15).
    bug 
    opened by eramdiaz 13
  • Define `flatten` for `NamedRollout` and `NamedParallel`

    Define `flatten` for `NamedRollout` and `NamedParallel`

    It is unclear how to flatten NamedRollout and NamedParallel, and should NamedRollout even be flattenable? For example, in the following case, what should the desired flattened transform be? NamedRollout(t1=t1, t2=NamedRollout(t2a=t2a, t2b=t2b)) = ? Is it NamedRollout(t1=t1, t2a=t2a, t2b=t2b)? Does this make sense, and what about the key t2?

    opened by jasonkhadka 9
  • Edge case in decorating a Module instance

    Edge case in decorating a Module instance

    from torchvision.models.densenet import DenseNet
    from padl import transform
    
    
    @transform
    class ImageClassifier(DenseNet):
        def __init__(self):
            super(ImageClassifier, self).__init__(48, (6, 12, 36, 24), 96)
    
        def load_state_dict(self, state_dict, strict=True):
            # '.'s are no longer allowed in module names, but previous _DenseLayer
            # has keys 'norm.1', 'relu.1', 'conv.1', 'norm.2', 'relu.2', 'conv.2'.
            # They are also in the checkpoints in model_urls. This pattern is used
            # to find such keys.
            # Credit - https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py#def _load_state_dict()
            import re
            pattern = re.compile(r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
    
            for key in list(state_dict.keys()):
                res = pattern.match(key)
                if res:
                    new_key = res.group(1) + res.group(2)
                    state_dict[new_key] = state_dict[key]
                    del state_dict[key]
    
            return super(ImageClassifier, self).load_state_dict(state_dict, strict)
    

    This breaks with message:

    In [2]: layer
    Out[2]: ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    ~/lf1-io/padl/.venv/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj)
        700                 type_pprinters=self.type_printers,
        701                 deferred_pprinters=self.deferred_printers)
    --> 702             printer.pretty(obj)
        703             printer.flush()
        704             return stream.getvalue()
    
    ~/lf1-io/padl/.venv/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj)
        389                             meth = cls._repr_pretty_
        390                             if callable(meth):
    --> 391                                 return meth(obj, self, cycle)
        392                         if cls is not object \
        393                                 and callable(cls.__dict__.get('__repr__')):
    
    ~/lf1-io/padl/.venv/lib/python3.8/site-packages/padl/transforms.py in _repr_pretty_(self, p, cycle)
        399             title = make_bold(title)
        400         top_message = title + ':' + '\n\n'
    --> 401         bottom_message = textwrap.indent(self._pd_longrepr(), '   ')
        402         p.text(top_message + bottom_message if not cycle else '...')
        403
    
    ~/lf1-io/padl/.venv/lib/python3.8/site-packages/padl/transforms.py in _pd_longrepr(self)
        956
        957     def _pd_longrepr(self) -> str:
    --> 958         return torch.nn.Module.__repr__(self)
        959
        960
    
    ~/lf1-io/padl/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py in __repr__(self)
       1810             extra_lines = extra_repr.split('\n')
       1811         child_lines = []
    -> 1812         for key, module in self._modules.items():
       1813             mod_str = repr(module)
       1814             mod_str = _addindent(mod_str, 2)
    
    ~/lf1-io/padl/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
       1175             if name in modules:
       1176                 return modules[name]
    -> 1177         raise AttributeError("'{}' object has no attribute '{}'".format(
       1178             type(self).__name__, name))
       1179
    
    AttributeError: 'ImageClassifier' object has no attribute '_modules'
    

    The error doesn't occur when using t = transform(ImageClassifier()) with the decorator removed.

    bug 
    opened by blythed 8
  • 403 variables

    403 variables

    Fixes #403

    From new docs:

    Using padl.param one can specify parameters which can then be overridden when loading files.

    from padl import transform, param, save
    
    x = param(1, name='x', description='add this much')
    
    @transform
    def f(y):
        return x + y
    
    save(f, 'f.padl')
    

    When loading a Transform that uses a parameter, one can specify the value of the parameter as a keyword argument to padl.load:

    >>> from padl import load
    >>> f = load('f.padl', x=1000)
    >>> f(1)
    1001
    

    Params usually have a default value. One can force the user to provide them with use_default=False.
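
    A minimal sketch of that option (whether use_default is a keyword of padl.param or of padl.load is an assumption based on the description above):

    # hypothetical: with no usable default, load('f.padl') must be given x=...
    x = param(1, name='x', description='add this much', use_default=False)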

    opened by wuhu 7
  • Loading / saving problems - collection with minimal examples

    Loading / saving problems - collection with minimal examples

    1. using the wrapper with functions - but not as a decorator ✔️

    def simple_func(x):
        return x
    
    print(trans(simple_func).lf_dumps())
    

    gives

    def simple_func(x):
        return x
    
    
    _lf_main = simple_func
    

    2. symbols created using with ✔️ (still doesn't work, but added an exception)

    [with not supported by thingfinder]

    with open('README.md') as f:
        o = f.read()
    @trans
    def bla(x):
        return o + f
    print(bla.lf_dumps())
    

    -> f not found

    3. star import ✔️

    [the symbols from the star import are not to be found in the source]

    from lf import *
    
    @trans
    def f(x):
        return x 
    

    -> trans not found

    4. transforms that are created inside class methods and depend on self ✖️ won't fix for now, should be documented

    [things from self don't exist in the scope]

    class X:
        def __init__(self, y):
            self.y = y
            
        def makemeatransform(self):
            @trans
            def f(x):
                return x + self.y
            return f
    print(X(1).makemeatransform().lf_dumps())
    

    -> self not found

    opened by wuhu 6
  • Grouping of rolled out transforms is non-intuitive

    Grouping of rolled out transforms is non-intuitive

    Something that is not very straightforward to do in aleph when designing your models is giving a certain structure to the inputs/outputs at each level.

    For example. I may have a rollout like t_1 + t_2 + t_3 before t_4 / t_5, and I may want to feed t_4 with the outputs of t_1 and t_2, and feed t_5 with the output of t_3.

    Right now we would do something like this:

    t_1 + t_2 + t_3
    >> lf.trans(lambda x: ( (x[0], x[1]), x[2]) )
    >> t_4 / t_5
    

    This current solution is not very pretty, and not very intuitive.

    Another current solution in aleph is to use the flatten argument of the Transforms, making a transform that combines t_1 and t_2 and setting flatten to False, like t_12 = Rollout([t_1, t_2], flatten=False):

    Rollout([t_1+ t_2, t_3], flatten=False) 
    >> t_4 / t_5
    

    or, in shorter notation, combining the Transforms and giving a name, like t_12 = t_1 + t_2 -'combined'.

    t_12  = t_1 + t_2 - 'combined'
    t_12 + t_3
    >> t_4 / t_5
    

    However, again both of these solutions aren't immediately intuitive.

    A nice solution for this would be writing the rollout as (t_1 + t_2) + t_3, which would ideally produce a ((o_1, o_2), o_3) output. However, this is impossible in Python because internally the associative property is enforced for the + operator. For example, (t_1 + t_2) + t_3 must give the same result as t_1 + t_2 + t_3 = (t_1 + (t_2 + t_3)), so this solution will not work.

    What if we used brackets instead of parentheses? Then we could write:

    [t_1 + t_2] + t_3
    >> t_4 / t_5
    

    This would be an ideal way for users to easily organize their thoughts and make the pipelines much cleaner. I think this would make the package much more intuitive and user-friendly.

    opened by eramdiaz 6
  • fixed output arrow

    fixed output arrow

    Description

    Fixes #258

    t = plus_out3 >> m1 / m2 / m2 >> plus
    
    Compose - "t":
    
          │
          ▼ x
       0: plus_out3
          ││└───────────────────────────────────────┐
          │└───────────────────┐                    │
          │                    │                    │
          ▼ arg                ▼ arg                ▼ arg
       1: multiply(factor=2) / multiply(factor=2) / multiply(factor=2)
          │____________________│                    │
          │_________________________________________│
          │
          ▼ a
       2: plus  
    
    

    previous output:

    Compose - "t":
    
          │
          ▼ x
       0: plus_out3
          ││└───────────────────────────────────────┐
          │└───────────────────┐                    │
          │                    │                    │
          ▼ arg                ▼ arg                ▼ arg
       1: multiply(factor=2) / multiply(factor=2) / multiply(factor=2)
          │
          ▼ a
       2: plus  
    
    opened by jasonkhadka 5
  • Weird identities are being added to composes

    Weird identities are being added to composes

    m = (
        identity + identity
        >> ~ identity
        >> batch
        >> identity
    )
    m
    
    Compose - "m":
    
          └─────────────────┐
          │                 │
          ▼ args            ▼ args
       0: Identity()      + Identity()
          │
          ▼ args
       1: ~ Identity()
          │
          ▼ args
       2: Batchify(dim=0)
          │
          ▼ args
       3: Identity()     
    
    m.pd_forward
    
    Parallel:
    
       │└─▶ 0: Identity()
       │  /
       └──▶ 1: Identity()
    
    m.pd_postprocess
    
    Parallel:
    
       │└─▶ 0: Identity()
       │  /
       └──▶ 1: Identity()
    

    The postprocess should not depend on what happens in the very first transform of the compose.

    bug 
    opened by blythed 5
  • Saving Bug with `params`

    Saving Bug with `params`

    🐞 Bug

    import padl
    import torch
    
    torch.nn = padl.transform(torch.nn)
    
    
    class HiddenState(torch.nn.Module):
        def __init__(self, layer):
            super().__init__()
            self.layer = layer
            
        def forward(self, x):
            return self.layer(x)[0]
    
    
    def build_classifier(
        rnn_layer,
        n_tokens,
    ):
        return (
            torch.nn.Embedding(n_tokens, rnn_layer.n_input)
            >> padl.transform(rnn_layer)
            >> padl.same[0]
            >> torch.nn.Linear(rnn_layer.n_hidden, n_tokens)
        )
    

    config.py

    from padl import params, save
    from my_codebase.models import build_classifier
    import torch
    
    
    rnn = torch.nn.GRU(
        **params(
            'rnn',
            input_size=64,
            hidden_size=512,
            num_layers=1,
        )
    )
    
    layer = build_classifier(
        rnn_layer=rnn,
        **params(
            'classifier',
            n_tokens=16,
        )
    )
    
    save(layer, 'my_classifier.padl')
    

    run config.py

    ---------------------------------------------------------------------------
    NameNotFound                              Traceback (most recent call last)
    <ipython-input-4-66f7d590781c> in <module>
    ----> 1 layer.pd_save('my_classifier.padl', force_overwrite=True)
    
    ~/lf1-io/padl/padl/transforms.py in pd_save(self, path, force_overwrite, strict_requirements)
        428
        429             with TemporaryDirectory('.padl') as dirname:
    --> 430                 self.pd_save(dirname, False, strict_requirements=strict_requirements)
        431                 rmtree(path)
        432                 copytree(dirname, path)
    
    ~/lf1-io/padl/padl/transforms.py in pd_save(self, path, force_overwrite, strict_requirements)
        438         for i, subtrans in enumerate(self._pd_all_transforms()):
        439             subtrans.pd_pre_save(path, i, options=options)
    --> 440         code, requirements = self._pd_dumps(return_requirements=True,
        441                                             strict_requirements=strict_requirements,
        442                                             path=path)
    
    ~/lf1-io/padl/padl/transforms.py in _pd_dumps(self, return_requirements, path, strict_requirements)
        665             be found. If *False* print a warning if that's the case.
        666         """
    --> 667         graph = self._pd_build_codegraph(name='_pd_main')
        668         Serializer.save_all(graph, path)
        669         code = graph.dumps()
    
    ~/lf1-io/padl/padl/transforms.py in _pd_build_codegraph(self, graph, name)
       1802             varname = transform.pd_varname(self._pd_call_info.scope)
       1803             # pylint: disable=protected-access
    -> 1804             transform._pd_build_codegraph(graph, varname)
       1805
       1806         self._pd_codegraph_find_dependencies(graph, todo)
    
    ~/lf1-io/padl/padl/transforms.py in _pd_build_codegraph(self, graph, name)
        554         todo = self._pd_codegraph_add_startnodes(graph, new_name)
        555
    --> 556         self._pd_codegraph_find_dependencies(graph, todo)
        557
        558         return graph
    
    ~/lf1-io/padl/padl/transforms.py in _pd_codegraph_find_dependencies(self, graph, todo)
        579             # Only triggered if KeyError or AttributeError is raised
        580             # find how *next_name* came into being
    --> 581             next_codenode = find_codenode(next_name,
        582                                           self._pd_external_full_dump_modules)
        583
    
    ~/lf1-io/padl/padl/dumptools/var2mod.py in find_codenode(name, full_dump_module_names)
        703 def find_codenode(name: ScopedName, full_dump_module_names=None) -> "CodeNode":
        704     """Find the :class:`CodeNode` corresponding to a :class:`ScopedName` *name*. """
    --> 705     (source, node), found_name = find_in_scope(name)
        706
        707     module_name = None
    
    ~/lf1-io/padl/padl/dumptools/symfinder.py in find_in_scope(scoped_name)
        788     if scope.module is None:
        789         raise NameNotFound(format_scoped_name_not_found(scoped_name))
    --> 790     source, node, name = find_scopedname(searched_name)
        791     if getattr(node, '_globalscope', False):
        792         scope = Scope.empty()
    
    ~/lf1-io/padl/padl/dumptools/symfinder.py in find_scopedname(scoped_name)
        963         module = sys.modules['__main__']
        964     try:
    --> 965         return find_scopedname_in_module(scoped_name, module)
        966     except TypeError as exc:
        967         if module is not sys.modules['__main__']:
    
    ~/lf1-io/padl/padl/dumptools/symfinder.py in find_scopedname_in_module(scoped_name, module)
        866 def find_scopedname_in_module(scoped_name: ScopedName, module):
        867     source = sourceget.get_module_source(module)
    --> 868     return find_scopedname_in_source(scoped_name, source)
        869
        870
    
    ~/lf1-io/padl/padl/dumptools/symfinder.py in find_scopedname_in_source(scoped_name, source, tree)
        846                         ScopedName(var_name, scoped_name.scope, (pos.lineno, pos.col_offset))
        847                     )
    --> 848     raise NameNotFound(
        849         format_scoped_name_not_found(scoped_name)
        850     )
    
    NameNotFound: Could not find "n_tokens" in scope "my_codebase.models.build_classifier".
    
    Please make sure that "n_tokens" is defined.
    
    bug 
    opened by blythed 0
  • Debug/ set breakpoint at pipeline position

    Debug/ set breakpoint at pipeline position

    Would be nice to stop the runtime at a particular transform, using some kind of coordinate system. E.g. set_breakpoint(my_transform, [1, 5, 2]). Next level would be to, in addition, add a breakpoint at a certain position even within the code of a transform.

    enhancement 
    opened by blythed 1
  • Automatically full dump things, depending on what kind of module it is

    Automatically full dump things, depending on what kind of module it is

    πŸ›°οΈ Feature

    Automatically full-dump things, depending on what kind of module it is.

    Description

    Currently, import-dump is the default for all imported modules. That's unintuitive (and potentially dangerous).

    Solution

    Full-dump all imported modules by default, unless they are installed modules.

    enhancement 
    opened by wuhu 0
  • torchvision transforms do not work as padl transforms unless wrapped with padl.transform()

    torchvision transforms do not work as padl transforms unless wrapped with padl.transform()

    🐞 Bug

    (Some?) torchvision transforms do not work as padl transforms "out of the box", as implied by the documentation (Combining transforms)

    1. Steps: Define and evaluate a pipeline that includes a bare torchvision transform. tvt.Normalize and tvt.ToTensor definitely throw errors, but others should be tested.
    2. Code:
    from padl import batch, unbatch, transform
    from torchvision import transforms as tvt
    import torch.nn as nn
    
    pipeline = (
        tvt.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        >> batch
    >> transform(nn.Softmax(dim=1)) # trivial example
        >> unbatch
        ...
    )
    
    3. Input: Evaluating the pipeline in an ipykernel session, or running a forward pass with pipeline.infer_apply(input), fails, as the attributes expected from a padl.transform are missing.
    4. Error:
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    /empaia/glomeruli-segmentation/request.py in <cell line: 1>()
         71 stream = await get_tile(session, slide, rect)
         72 content = await stream.read()
    ----> 73 pipeline.infer_apply(content)
    
    File ~/.cache/pypoetry/virtualenvs/glomeruli-segmentation-h4dBGJsh-py3.8/lib/python3.8/site-packages/padl/transforms.py:960, in Transform.infer_apply(self, inputs)
        957 in_args = inputs
        958 _pd_trace.clear()
    --> 960 preprocess = self.pd_preprocess
        961 forward = self.pd_forward
        962 postprocess = self.pd_postprocess
    
    File ~/.cache/pypoetry/virtualenvs/glomeruli-segmentation-h4dBGJsh-py3.8/lib/python3.8/site-packages/padl/transforms.py:882, in Transform.pd_preprocess(self)
        877 @property
        878 def pd_preprocess(self) -> "Transform":
        879     """The preprocessing part of the Transform.
        880 
        881     The device must be propagated from self."""
    --> 882     pre = self.pd_stages[0]
        883     pre.pd_to(self.pd_device)
        884     return pre
    
    File ~/.cache/pypoetry/virtualenvs/glomeruli-segmentation-h4dBGJsh-py3.8/lib/python3.8/site-packages/padl/transforms.py:219, in Transform.pd_stages(self)
        215 @property
        216 @lru_cache(maxsize=128)
        217 def pd_stages(self):
        218     """Get a tuple of the pre-process, forward, and post-process stages."""
    --> 219     _, splits, has_batchify, has_unbatchify = self._pd_splits()
        220     if has_batchify and has_unbatchify:
        221         preprocess, forward, postprocess = splits
    
    File ~/.cache/pypoetry/virtualenvs/glomeruli-segmentation-h4dBGJsh-py3.8/lib/python3.8/site-packages/padl/transforms.py:1857, in Compose._pd_splits(self, input_components)
       1853 # for each sub-transform ...
       1854 for transform_ in self.transforms:
       1855     # ... see what comes out ...
       1856     output_components, sub_splits, sub_has_batchify, sub_has_unbatchify = \
    -> 1857         transform_._pd_splits(output_components)
       1859     has_batchify = has_batchify or sub_has_batchify
       1860     has_unbatchify = has_unbatchify or sub_has_unbatchify
    
    File ~/.cache/pypoetry/virtualenvs/glomeruli-segmentation-h4dBGJsh-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py:947, in Module.__getattr__(self, name)
        945     if name in modules:
        946         return modules[name]
    --> 947 raise AttributeError("'{}' object has no attribute '{}'".format(
        948     type(self).__name__, name))
    
    AttributeError: 'Normalize' object has no attribute '_pd_splits'
    

    Note: _pd_splits is not the only attribute that cannot be found; errors referring to different _pd_* attributes are also thrown, although I cannot reproduce this right now.

    Note: wrapping tvt.Normalize as transform(tvt.Normalize) resolves the issue and gives the expected outcome (see the sketch below).
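
    For reference, a minimal sketch of the workaround mentioned in the note above (the Softmax layer is a stand-in for the rest of the reported pipeline):

    from padl import batch, unbatch, transform
    from torchvision import transforms as tvt
    import torch.nn as nn

    pipeline = (
        transform(tvt.Normalize)(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # explicit wrap avoids the error
        >> batch
        >> transform(nn.Softmax(dim=1))
        >> unbatch
    )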

    Expected behavior

    The torchvision transform should compose like a padl transform "out of the box", as shown in the documentation (see Combining transforms), i.e. without needing to be wrapped with padl.transform().

    Environment/System

    Additional context

    I would find it understandable if it were necessary to simply decorate the transforms with padl.transform, but then the documentation should be updated.

    bug documentation good first issue 
    opened by theodore-evans 4
  • The output of `(t1 + t2) / t3` isn't flattened

    The output of `(t1 + t2) / t3` isn't flattened

    Either a 🐞 Bug or Unclear Documentation

    https://lf1-io.github.io/padl/latest/usage/combining_transforms.html#grouping-transforms says

    By default, Pipelines, such as rollouts and parallels, are flattened. This means that even if you use parentheses to group them, the output will be a flat tuple.

    I provide a code example (used with padl 0.2.5) where I expect the output to be a flat tuple, but instead it's a tuple with a tuple inside:

    import padl
    pipeline = (padl.Identity() + padl.transform(lambda x: x**2)) / padl.transform(lambda y: y + 100)
    print(pipeline((2, 5)))  # prints namedtuple(out_0=namedtuple(out_0=2, out_1=4), out_1=105)
    (pipeline >> padl.Identity() / padl.Identity() / padl.Identity())((2, 5))  # raises IndexError: tuple index out of range
    
    bug documentation 
    opened by philip-bl 4