Myia prototyping

Myia

Myia is a new differentiable programming language. It aims to support large-scale, high-performance computations (e.g. linear algebra) and their gradients. The main application Myia aims to support is research in artificial intelligence, in particular deep learning algorithms.

  • Define a model using a subset of Python, which is compiled to Myia (interfaces in languages other than Python may follow). This subset is general purpose and includes looping constructs and recursion. It excludes side effects and in-place operations.
  • Ask for the derivative of your model. Derivatives are fully supported for all control flow and all differentiable primitives.
  • Compile to efficient CPU and GPU code that optimizes the use of your resources.
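
To illustrate the first point, here is the kind of code that falls inside the subset: plain loops, recursion, and rebinding, but no side effects and no in-place mutation. (The functions are hypothetical examples, not part of Myia's API.)

```python
# Hypothetical examples of the Python subset described above: pure
# functions only; loops and recursion are fine, but nothing is mutated
# in place.

def horner(coeffs, x):
    """Evaluate a polynomial with Horner's rule using a plain loop."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c       # rebinding, not in-place mutation
    return acc

def power(x, n):
    """General recursion is part of the subset too."""
    if n == 0:
        return 1.0
    return x * power(x, n - 1)

print(horner([1.0, 2.0, 3.0], 2.0))  # 1*4 + 2*2 + 3 = 11.0
print(power(2.0, 10))                # 1024.0
```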

If you want to play with the current implementation, you can check out ALPHA.md.

A short document explaining some of Myia's inner workings is available here.

Status

Myia is currently under development and is not yet ready for use. We are optimistic about having an alpha version to play with around the start of 2020.

See Roadmap.

Motivation

Development in artificial intelligence has been undergoing a boom in the past decade, chiefly due to the success of deep neural networks. The training of a neural network is a sort of differentiable program: one writes a program to compute the output and a cost, and then one computes the derivative of that cost with respect to the model's parameters to determine how they should be updated.

Differentiation can be automated, but mainstream programming languages offer no support for this, hence the need for libraries or programming languages that can reliably support these applications.
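
As a concrete illustration of what such systems automate, here is the derivative of a tiny model's cost with respect to its parameters, approximated numerically with finite differences. (This toy example is ours, for illustration; automatic differentiation computes these same values exactly and efficiently.)

```python
# A tiny linear model and a cost; the derivative of the cost with
# respect to each parameter is approximated by central finite
# differences, the values AD would compute exactly.

def cost(w, b, xs, ys):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def numeric_grad(f, args, i, eps=1e-6):
    """Central-difference derivative of f in its i-th argument."""
    a_plus = list(args); a_plus[i] += eps
    a_minus = list(args); a_minus[i] -= eps
    return (f(*a_plus) - f(*a_minus)) / (2 * eps)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
dw = numeric_grad(lambda w, b: cost(w, b, xs, ys), [1.0, 0.0], 0)  # close to -28/3
db = numeric_grad(lambda w, b: cost(w, b, xs, ys), [1.0, 0.0], 1)  # close to -4.0
```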

The current leading solutions for deep learning fall into two camps:

Computation graph-based solutions such as TensorFlow, Theano and MXNet support automatic differentiation and are very well optimized, but they are not fully general, with only limited support for loops and none for general recursion. Thus models like recursive neural networks are tricky and awkward to write.

Operator overloading solutions such as PyTorch or Autograd use a dynamic approach to automatic differentiation which makes them much more general, but they are tightly coupled to the Python language and cannot reap the benefits of an optimizing compiler. They also incur a certain amount of overhead per operation, which discourages composing small, cheap operations.

Myia's solution is to define a strongly-typed, general-purpose intermediate representation with an IR-level automatic differentiation transformation, which can then be compiled and optimized for various targets, thereby getting the best of both leading approaches.
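
The idea can be sketched on a toy straight-line IR. Forward mode is shown for brevity (Myia's transform is reverse mode), but the principle is the same: differentiation is a program-to-program transformation on the IR, so its output can be optimized and compiled like any other code. Everything below is an illustrative sketch, not Myia's actual representation.

```python
# A toy IR: a program is a list of (out, op, a, b) instructions.
# Differentiation *appends more instructions* to the program instead of
# tracing operations at runtime.

def run(prog, env):
    """Interpret a list of (out, op, a, b) instructions."""
    ops = {'add': lambda a, b: a + b, 'mul': lambda a, b: a * b}
    env = dict(env)
    for out, op, a, b in prog:
        env[out] = ops[op](env[a], env[b])
    return env

def jvp_transform(prog):
    """Append tangent instructions: d_v carries the derivative of v."""
    out = list(prog)
    for i, (v, op, a, b) in enumerate(prog):
        if op == 'add':                      # d(a+b) = da + db
            out.append(('d_' + v, 'add', 'd_' + a, 'd_' + b))
        else:                                # d(a*b) = da*b + a*db
            out.append(('t1_%d' % i, 'mul', 'd_' + a, b))
            out.append(('t2_%d' % i, 'mul', a, 'd_' + b))
            out.append(('d_' + v, 'add', 't1_%d' % i, 't2_%d' % i))
    return out

# f(x) = x*x + x, so f'(x) = 2x + 1
prog = [('t', 'mul', 'x', 'x'), ('y', 'add', 't', 'x')]
env = run(jvp_transform(prog), {'x': 3.0, 'd_x': 1.0})
# env['y'] == 12.0 and env['d_y'] == 7.0
```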

Roadmap

Current

  • Parser: Supports def, if, for, while, operators, function calls, class and methods (limited support).
  • Intermediate representation: Implemented, with an array of utilities.
  • Debug VM: Faithfully runs the IR.
  • VM: Works on the simplified/optimized IR.
  • Primitives: Scalar primitives work, as well as map, reduce, broadcasting, 2D convolutions, concat/split, and many other operations.
  • Type system: Types are inferred without the need for annotations. Shapes can also be inferred. Myia supports recursive ADTs (e.g. tree data structures).
  • Optimization: Pattern-based optimizations, inlining, constant propagation, common subexpression elimination, closure conversion.
  • Automatic differentiation: Second order differentiation is not yet in working order.
  • GPU support: Using Relay or PyTorch.
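
One bullet above mentions recursive ADTs; as an illustration, here is a hypothetical tree type and a recursion over it, the pattern recursive neural networks rely on. (In Myia the types below would be inferred rather than annotated.)

```python
# A recursive ADT: a tree is either a Leaf holding a value or a Node
# with two subtrees. tree_sum recurses over the structure.

from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    value: float

@dataclass
class Node:
    left: 'Tree'
    right: 'Tree'

Tree = Union[Leaf, Node]

def tree_sum(t: Tree) -> float:
    if isinstance(t, Leaf):
        return t.value
    return tree_sum(t.left) + tree_sum(t.right)

t = Node(Leaf(1.0), Node(Leaf(2.0), Leaf(3.0)))
```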

In development

  • Compiler optimization: The compiler currently needs to be optimized to reduce compile times.
  • Auto-monadization: We are working to support print statements and random number generation through an auto-monadization system that can automatically keep track of the IO or RNG state.
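
By way of illustration, here is roughly what such a transformation could produce for two consecutive print statements, written by hand. The `io_print` helper and the integer IO token are invented for this sketch; the point is that every effectful call consumes the current state and returns a new one, so the transformed program stays pure.

```python
# Hand-written sketch of state threading (illustrative names, not
# Myia's actual transform).

def io_print(io, text):
    print(text)          # the actual effect, ordered by the io token
    return io + 1        # a fresh, distinct IO state

def greet(io, name):
    # What `print("hello"); print(name)` becomes after threading:
    io = io_print(io, "hello")
    io = io_print(io, name)
    return io

final_io = greet(0, "myia")
```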

Next steps

  • Error messages: We need to make sure that every likely mistake leads to an understandable and traceable error diagnosis.

Near future

  • Serialization: Serializing optimized graphs will allow for greater performance across runs and greater portability across systems.
  • Debugger: The intent is to have a step debugger for Myia. There used to be a working one for a previous version of the IR, so this should not pose a problem.
  • More Python syntax: break/continue.

After Beta

  • Even more Python syntax: Support for these features is not certain.
    • Augmented assignment (under restrictions)
    • yield and await
  • Support other languages: Which ones will depend on demand. A new language is also a possibility.

Publications

Citation

If you use Myia for a scientific paper, please cite the above paper or mention Myia in the acknowledgements. It would be great if you could also let us know about it.

Comments
  • Parser

    This is the converter/parser I committed here, adapted to the current IR in #3.

    Heavily WIP, just posting for early feedback.

    • [x] Add documentation
    • [x] Lots of tests
    • [x] Get rid of BLOCKS/RETURNS hack and actually return the generated functions
    • [x] Consider if an alternative for the CONSTANTS dictionary is needed
    opened by bartvm 58
  • Graph-based ANF IR

    Still WIP, but I made a start with the IR. Some points:

    • The nodes have an object hierarchy so that you can use isinstance.
    • I imagine that we might have situations in which we want to serialize the IR (e.g. create protobufs), so I'm trying to keep it pretty minimal.
    • Edges are one-directional right now. Since bidirectionality is only needed in some passes (and requires more bookkeeping when mutating the graph), I think it's best to try and keep def-use edges separate somehow.
    • I was thinking we can hide all the debug information in a single dictionary. Debug information would include things such as human-readable variable names and the Python AST node from which the parser created the node.
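
    These points can be sketched in a few lines (illustrative class names, not the actual implementation): a small isinstance-friendly hierarchy, one-directional use-to-definition edges, and debug info kept in a side dictionary.

```python
# Minimal sketch of an ANF-style node hierarchy with a debug-info dict.

class ANFNode:
    def __init__(self, debug=None):
        self.debug = debug or {}   # e.g. {'name': 'x', 'ast': <ast node>}

class Constant(ANFNode):
    def __init__(self, value, **kw):
        super().__init__(**kw)
        self.value = value

class Parameter(ANFNode):
    pass

class Apply(ANFNode):
    def __init__(self, inputs, **kw):
        super().__init__(**kw)
        self.inputs = inputs       # edges point from use to definition only

x = Parameter(debug={'name': 'x'})
node = Apply([Constant('add'), x, Constant(1)])
```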
    opened by bartvm 44
  • Retrieve graph nesting structure and free variables

    This adds the myia.cconv.NestingAnalyzer class, used as follows:

    def f(x):
        def g():
            return x
        return g
    
    f_graph = parse(f)
    
    na = NestingAnalyzer().run(f_graph)
    assert na.parents == {f_graph: None, g_graph: f_graph}
    assert na.fvs == {f_graph: set(), g_graph: {x_node}}
    

    (This won't run because g_graph etc. are not set to the corresponding graphs and nodes, but it should illustrate the point nonetheless.)
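
    The analysis itself can be sketched with toy data structures (not the real myia.cconv code): a variable used by a graph but bound elsewhere is free in that graph, and the binder of a free variable determines the parent. With several free variables the real analyzer picks the closest enclosing binder; this sketch just takes any one.

```python
# Toy nesting analysis: graphs map to (parameters, used_variables).

def analyze(graphs):
    """graphs: {name: (params, used_vars)}; returns (parents, fvs)."""
    binder = {}
    for g, (params, _) in graphs.items():
        for p in params:
            binder[p] = g
    parents, fvs = {}, {}
    for g, (params, used) in graphs.items():
        free = {v for v in used if binder.get(v) != g}
        fvs[g] = free
        # simplified: parent comes from any free variable's binder
        parents[g] = binder[next(iter(free))] if free else None
    return parents, fvs

# def f(x): def g(): return x; return g
graphs = {'f': (['x'], []), 'g': ([], ['x'])}
parents, fvs = analyze(graphs)
```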

    opened by breuleux 14
  • Use pyproject.toml to manage dependencies

    All dependencies are now defined in pyproject.toml and most are installed with poetry install. Conda dependencies are put in custom sections in the file and you have to use scripts/make_deps.py to generate a requirements.conda file. scripts/install-deps-[g/c]pu.sh will do it for you.

    This also moves some scripts into scripts/.

    Oh, and PyTorch is upgraded to 1.3.0. Unfortunately there are a few errors with 1.4.0 (to investigate).

    opened by breuleux 12
  • Implementation of random number generator for Myia

    Hi @abergeron @breuleux

    I just made this PR to ease any discussion around the RNG implementation.

    Currently, I have implemented the RNG in pure Python, based on Theano code. For reference:

    • Theano code is here: https://github.com/Theano/Theano/blob/master/theano/sandbox/rng_mrg.py
    • Original Java implementation by Pierre Lecuyer is here: https://github.com/umontreal-simul/ssj/blob/master/src/main/java/umontreal/ssj/rng/MRG31k3p.java

    The implementation currently seems to work. I also added a test to compare output with the Theano output, and it currently produces the same results.

    Then, my next step was to try to compile the Python implementation with Myia, but I am facing many issues:

    • The Python implementation checks input seeds, as not all values are allowed, and raises exceptions if bad seeds are given. It seems I cannot currently compile exception raising with Myia, though I saw a raise primitive in operations. I don't yet know why compilation fails.
    • I need to generate the output array and populate it with random values, but I don't yet have a way to create new data. @breuleux told me about the scalar_to_array + distribute ops, but, from what I saw, it seems these operations are currently used in the zero_like op (aren't they?), which is not suited here. So I guess I need to create a zeros operation that takes just a shape and dtype as arguments, and I guess I should use np.zeros as the entry point (i.e. the value to be replaced by the op later in compilation).
    • I also need to create the initial random state from a given seed, which can be either an integer or a ready-to-use integer vector with 6 values. I guess I should create something like an np.asarray op here to ease this step.
    • I simplified the random generator so that it uses only one vector as the state to update, instead of a matrix as in Theano (in Theano, this should be equivalent to getting random numbers with the parameter nstreams=1). Thus, in intermediate functions, I can just return the new updated state vector (it contains only 6 values) instead of trying to update a potentially huge matrix in place. So I removed one place where an in-place update was used, but I will still need in-place updates when generating random numbers, as I will need to a) create the output array, and b) update each array value with a generated random scalar.
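
    The functional style described in the last point can be sketched as follows. A toy LCG stands in for MRG31k3p here (the real constants and recurrence are different); the point is that the state is threaded explicitly and the output is built rather than updated in place.

```python
# Toy LCG as a stand-in for MRG31k3p: pure state threading, no mutation.

def rng_step(state):
    """One pure step: state -> (new_state, uniform sample in [0, 1))."""
    state = (1103515245 * state + 12345) % (2 ** 31)
    return state, state / 2 ** 31

def uniform(state, n):
    """Generate n samples without any in-place update."""
    out = []
    for _ in range(n):
        state, x = rng_step(state)
        out = out + [x]        # rebuilds the list: no mutation
    return state, out

state, samples = uniform(12345, 4)
```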

    This is where I am, currently. What do you think?

    opened by notoraptor 12
  • vm

    In the spirit of early PRs, here is my code for the VM.

    What I'm trying to do here is to execute the graph directly, to make it easier to convert into a debugger. I still handle tail calls correctly so that we don't need an infinite amount of memory.

    This is currently very inefficient in its temporary storage of computed values (all values are kept until the function returns, except that tail calls are not considered returns). There can also be situations where we recompute values more than once when they are used across internal functions. The results should always be correct, though.
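
    The tail-call handling can be sketched as a trampoline (illustrative, not the actual VM code): a call in tail position returns a marker instead of recursing, and a driver loop keeps the Python stack flat.

```python
# A trampoline: tail calls become markers consumed by a driver loop,
# so the interpreter runs in constant stack depth.

class TailCall:
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args

def run(fn, *args):
    result = fn(*args)
    while isinstance(result, TailCall):       # constant stack depth
        result = result.fn(*result.args)
    return result

def countdown(n, acc):
    if n == 0:
        return acc
    return TailCall(countdown, n - 1, acc + n)  # tail call, no recursion

total = run(countdown, 100000, 0)  # would blow the stack if recursive
```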

    opened by abergeron 9
  • (wip)(do not merge) Run all tests on relay

    Missing operations in relay backend to make all tests pass:

    • array_setitem
    • conv_transpose2d
    • gather
    • scatter
    • scatter_add

    I am currently working on conv_transpose2d. Issues:

    • Seems limited to only groups=1 and dilation=(1, 1): https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/op/strategy/x86.py#L169
    • Persistent error when running pytest -xvvs tests/frontends/test_pytorch_ops.py::test_torch_conv2d on this branch:
    (myia) notoraptor@notoraptor-linux:~/mila/dev/git/myia$ pytest -xvvs tests/frontends/test_pytorch_ops.py::test_torch_conv2d
    =========================================================================================== test session starts ============================================================================================
    platform linux -- Python 3.7.6, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /home/notoraptor/anaconda3/envs/myia/bin/python
    cachedir: .pytest_cache
    rootdir: /media/win/Users/notoraptor/mila/dev/git/myia, inifile: pytest.ini
    plugins: cov-2.8.1
    collected 6 items                                                                                                                                                                                          
    
    tests/frontends/test_pytorch_ops.py::test_torch_conv2d[relay-cpu-grad0] <- tests/multitest.py ANTLR runtime and generated code versions disagree: 4.8!=4.7.2
    ANTLR runtime and generated code versions disagree: 4.8!=4.7.2
    Cannot find config for target=llvm, workload=('conv2d_NCHWc.x86', ('TENSOR', (2, 6, 4, 5), 'float32'), ('TENSOR', (3, 6, 3, 3), 'float32'), (2, 3), (3, 2, 3, 2), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    FAILED
    
    ================================================================================================= FAILURES =================================================================================================
    ____________________________________________________________________________________ test_torch_conv2d[relay-cpu-grad0] ____________________________________________________________________________________
    
    test = <tests.multitest.MyiaFunctionTest object at 0x7f7570a72650>
    
        def runtest(test):
    >       test.run(fn)
    
    tests/multitest.py:66: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    tests/multitest.py:166: in run
        return self.runtest(self, fn, **self.spec)
    tests/frontends/test_pytorch_ops.py:147: in _fwd_and_bwd
        res = gpipeline.run(input=fn, argspec=[*argspec, sens_type])
    myia/pipeline/pipeline.py:144: in run
        return self.make()(**args)
    myia/pipeline/pipeline.py:201: in __call__
        return self[:](**args)
    myia/pipeline/pipeline.py:245: in __call__
        raise results['error']
    myia/pipeline/pipeline.py:229: in run_and_catch
        results = fn(**valid_args)
    myia/pipeline/steps.py:402: in step_compile
        out = resources.backend.compile(graph, argspec, outspec)
    myia/pipeline/resources.py:201: in compile
        return self.backend.compile(graph, argspec, outspec)
    myia/compile/backends/__init__.py:277: in compile
        return self.proc.call_method('compile', graph, argspec, outspec)
    myia/compile/channel/__init__.py:131: in call_method
        return self._read_msg()
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <myia.compile.channel.RPCProcess object at 0x7f756e1fb5d0>
    
        def _read_msg(self):
            RemoteHandle.current_channel = self
            try:
                res = self.loader.get_data()
            finally:
                RemoteHandle.current_channel = None
            if isinstance(res, LoadedError):
    >           raise res
    E           myia.utils.serialize.LoadedError: Traceback (most recent call last):
    E           
    E             File "/media/win/Users/notoraptor/mila/dev/git/myia/myia/compile/channel/__main__.py", line 38, in _rpc_server
    E               res = meth(*args, **kwargs)
    E           
    E             File "/media/win/Users/notoraptor/mila/dev/git/myia/myia/compile/backends/__init__.py", line 297, in compile
    E               return handle(self.real.compile(graph, argspec, outspec))
    E           
    E             File "/media/win/Users/notoraptor/mila/dev/git/myia/myia/compile/backends/relay.py", line 884, in compile
    E               self.exec_kind)
    E           
    E             File "/media/win/Users/notoraptor/mila/dev/git/myia/myia/compile/backends/relay.py", line 675, in run
    E               add_functions(self.module, function_map, self.types)
    E           
    E             File "/media/win/Users/notoraptor/mila/dev/git/myia/myia/compile/backends/relay_helpers.py", line 275, in add_functions
    E               mod[gv] = funcs[gv]
    E           
    E             File "/home/notoraptor/anaconda3/envs/myia/lib/python3.7/site-packages/tvm/ir/module.py", line 75, in __setitem__
    E               return self._add(var, val)
    E           
    E             File "/home/notoraptor/anaconda3/envs/myia/lib/python3.7/site-packages/tvm/ir/module.py", line 84, in _add
    E               _ffi_api.Module_Add(self, var, val, update)
    E           
    E             File "tvm/_ffi/_cython/./packed_func.pxi", line 308, in tvm._ffi._cy3.core.PackedFuncBase.__call__
    E           
    E             File "tvm/_ffi/_cython/./packed_func.pxi", line 253, in tvm._ffi._cy3.core.FuncCall
    E           
    E             File "tvm/_ffi/_cython/./base.pxi", line 159, in tvm._ffi._cy3.core.CALL
    E           
    E           tvm._ffi.base.TVMError: Traceback (most recent call last):
    E             [bt] (8) /home/notoraptor/anaconda3/envs/myia/lib/libtvm.so(TVMFuncCall+0x65) [0x7f8ea4b23a25]
    E             [bt] (7) /home/notoraptor/anaconda3/envs/myia/lib/libtvm.so(+0x539dc4) [0x7f8ea440fdc4]
    E             [bt] (6) /home/notoraptor/anaconda3/envs/myia/lib/libtvm.so(+0x539a24) [0x7f8ea440fa24]
    E             [bt] (5) /home/notoraptor/anaconda3/envs/myia/lib/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0x425) [0x7f8ea440d875]
    E             [bt] (4) /home/notoraptor/anaconda3/envs/myia/lib/libtvm.so(+0x5322d4) [0x7f8ea44082d4]
    E             [bt] (3) /home/notoraptor/anaconda3/envs/myia/lib/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x1db) [0x7f8ea49c545b]
    E             [bt] (2) /home/notoraptor/anaconda3/envs/myia/lib/libtvm.so(+0xaeec28) [0x7f8ea49c4c28]
    E             [bt] (1) /home/notoraptor/anaconda3/envs/myia/lib/libtvm.so(+0x5245fa) [0x7f8ea43fa5fa]
    E             [bt] (0) /home/notoraptor/anaconda3/envs/myia/lib/libtvm.so(+0x423d01) [0x7f8ea42f9d01]
    E             File "/home/user/conda-bld/tvm-libs_1584032126820/work/src/ir/error.cc", line 133
    E           TVMError: 
    E           Error(s) have occurred. The program has been annotated with them:
    E           
    E           In `main`: 
    E           v0.0.4
    E           fn (%v_parameter25: Tensor[(2, 6, 4, 5), float32], %v_parameter26: Tensor[(3, 6, 3, 3), float32], %v_parameter27: Tensor[(3), float32], %v_parameter28: float32) -> (Tensor[(2, 6, 4, 5), float32], Tensor[(3, 6, 3, 3), float32], Tensor[(3), float32]) {
    E             let %seq.0 = (meta[relay.Constant][0],);
    E             let %seq.1 = (meta[relay.Constant][1], meta[relay.Constant][2], meta[relay.Constant][3], meta[relay.Constant][4]);
    E             let %seq.2 = (meta[relay.Constant][5], meta[relay.Constant][6], meta[relay.Constant][7], meta[relay.Constant][8]);
    E             let %seq.3 = broadcast_to(%v_parameter28, meta[relay.attrs.InitOpAttrs][0]);
    E             let %seq.4 = sum(%seq.3, axis=[0, 2, 3], keepdims=True);
    E             let %seq.5 = reshape(%seq.4, newshape=[3]);
    E             let %seq.6 = meta[relay.Constant][9];
    E             let %seq.7 = (meta[relay.Constant][10], meta[relay.Constant][11]);
    E             let %seq.8 = (meta[relay.Constant][12], meta[relay.Constant][13]);
    E             let %seq.9 = (meta[relay.Constant][14], meta[relay.Constant][15]);
    E             let %seq.10 = (meta[relay.Constant][16], meta[relay.Constant][17], meta[relay.Constant][18], meta[relay.Constant][19]);
    E             %0 = reshape(%v_parameter25, newshape=[1, -1, 0, 0]);
    E             %1 = tile(%seq.3, reps=[1, 6, 1, 1]);
    E             %2 = reshape(%1, newshape=[-1, 1, 0, 0]);
    E             %3 = nn.conv2d(%0, %2, padding=[3, 2, 3, 2], dilation=[2, 3], groups=12);
    E             %4 = reshape(%3, newshape=[2, 6, 3, 4, 3]);
    E             %5 = sum(%4, axis=[0]);
    E             %6 = transpose(%5, axes=[1, 0, 2, 3]);
    E             let %seq.11 = strided_slice(%6, begin=[0, 0, 0, 0], end=[None, None, 3, 3]);
    E             let %seq.12 = (1, 0);
    E             let %seq.13 = ();
    E             let %seq.14 = nn.conv2d_transpose(%seq.3, %v_parameter26, channels=3, kernel_size=[3, 3], strides=[2, 3], output_padding=[1, 0], padding=[3, 2, 3, 2]) in particular dimension 1 conflicts 3 does not match 6; unable to unify: `Tensor[(3, 3, 3, 3), float32]` and `Tensor[(3, 6, 3, 3), float32]`; in particular dimension 1 conflicts 3 does not match 6; unable to unify: `Tensor[(2, 3, 4, 5), float32]` and `Tensor[(2, 6, 4, 5), float32]`; ;
    E             let %seq.15 = (%seq.14, %seq.11, %seq.5);
    E             %seq.15
    E           }
    E           // meta data omitted. you can use show_meta_data=True to include meta data
    
    myia/compile/channel/__init__.py:148: LoadedError
    ========================================================================================= short test summary info ==========================================================================================
    FAILED tests/frontends/test_pytorch_ops.py::test_torch_conv2d[relay-cpu-grad0] - myia.utils.serialize.LoadedError: Traceback (most recent call last):
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    ============================================================================================ 1 failed in 17.73s ============================================================================================
    

    The main error is in this line: let %seq.14 = nn.conv2d_transpose(%seq.3, %v_parameter26, channels=3, kernel_size=[3, 3], strides=[2, 3], output_padding=[1, 0], padding=[3, 2, 3, 2]) in particular dimension 1 conflicts 3 does not match 6; unable to unify: `Tensor[(3, 3, 3, 3), float32]` and `Tensor[(3, 6, 3, 3), float32]`; in particular dimension 1 conflicts 3 does not match 6; unable to unify: `Tensor[(2, 3, 4, 5), float32]` and `Tensor[(2, 6, 4, 5), float32]`; ;. I still don't know what causes this.

    opened by notoraptor 7
  • Gradient generation

    This adds gradients for various primitives, and gradient generation. The tests pass and there is 100% coverage, but it's worth mentioning the whole thing is slightly rushed. A few comments:

    • There are two xfails which should be fixed at some point.
    • Support for array operations has not been added yet.
    • The default pipeline doesn't support gradients yet. The one that does is in test_grad, but it's a bit wonky (likely responsible for one of the xfails).
    • The grad transform isn't very well documented.
    • J/Jinv won't work with the debug VM right now.
    opened by breuleux 7
  • Better string representations of nodes and graphs

    Factored this out of #4 so that it is a bit easier to review. This fixes a bunch of issues with the node naming. The current naming:

    >>> def f(x):
    ...     return x
    >>> parse(f)
    <f = Graph(parameters=[x = Parameter(<f = Graph(parameters=[...])>)])>
    >>> parse(f).return_
    Apply(Inputs([Constant(<myia.primops.Return object at 0x106271630>), x = Parameter(<f = Graph(parameters=[x = Parameter(<f = Graph(parameters=[...])>)])>)]), <f = Graph(parameters=[x = Parameter(<f = Graph(parameters=[...])>)])>)
    

    With this PR:

    >>> parse(f)
    <myia.anf_ir.Graph(name=f, parameters=[x], return_=_apply1) object at 0x105c72da0>
    >>> parse(f).return_
    <myia.anf_ir.Apply(name=_apply2, inputs=[_constant3, x], graph=f) object at 0x105c82eb8>
    

    Main changes:

    • Make generated names valid variable names
    • Base generated names on type of object we are naming
    • Use str representation where needed to avoid recursively printing the entire graph
    • Ensure uniqueness of repr by printing object ID like CPython does
    • Only represent a constant with its value if it's a literal (to avoid printing really long strings when its value is e.g. a graph)
    opened by bartvm 7
  • Add tan to relay backend.

    @abergeron @breuleux

    This PR adds tan to relay backend.

    NB:

    The TVM API changed again, and now it seems tvm.relay.Module does not exist anymore. So I guess we would need to update the whole relay backend again to support the new TVM API before merging this pull request.

    On my computer, I use this code to compile relay with current TVM code:

    import tvm
    from tvm import relay
    from tvm.contrib import graph_runtime
    
    
    # Old way, does not work anymore currently.
    def compile_function_old(inputs, output):
        f = relay.Function(list(inputs), output)
        m = relay.Module({})
        fn = relay.GlobalVar('f')
        m[fn] = f
        e = relay.create_executor(mod=m)
        c = e.evaluate(m[fn])
        return c
    
    
    # New way. I don't know if another way exists, but this works now.
    def compile_function(inputs, output):
        ir = relay.Function(list(inputs), output)
        mod = tvm.IRModule.from_expr(ir)
        target = tvm.target.create('llvm')
        graph, lib, params = relay.build(mod, target=target)
        ctx = tvm.cpu()
        module = graph_runtime.create(graph, lib, ctx)
    
        def fn(*args):
            for i in range(len(inputs)):
                module.set_input(i, args[i])
            module.set_input(**params)
            module.run()
            out = module.get_output(0)
            return out
    
        return fn
    
    opened by notoraptor 6
  • Syntax checker

    Going through parse.py in master I noticed that the parser is responsible for checking whether we support the given AST, and if not, raising an informative syntax error. In Tangent we actually do this in a separate pass, which I think might be a good idea. It keeps the parser a bit cleaner (no need for checks) and separates two logical processes.

    I also found a comment that says "maybe inherit from SyntaxError? Investigate." so I had a look, and it is a good idea. If you set the fields appropriately, it will automatically pretty print the error message with the filename, a little arrow, etc.

    bartvm-macbookpro3:myia bartvm$ python myia/fence.py
    Traceback (most recent call last):
      File "myia/fence.py", line 120, in <module>
        fence.visit(ast.parse(textwrap.dedent(inspect.getsource(f))))
      File "myia/fence.py", line 88, in visit
        self.generic_visit(node)
      File "/Users/bartvm/anaconda3/lib/python3.6/ast.py", line 261, in generic_visit
        self.visit(item)
      File "myia/fence.py", line 88, in visit
        self.generic_visit(node)
      File "/Users/bartvm/anaconda3/lib/python3.6/ast.py", line 261, in generic_visit
        self.visit(item)
      File "myia/fence.py", line 93, in visit
        return visitor(node)
      File "myia/fence.py", line 98, in visit_Pass
        self.raise_("you shall not pass")
      File "myia/fence.py", line 109, in raise_
        self.lines[self.lineno - 1], message)
      File "myia/fence.py", line 113
        pass
        ^
    __main__.MyiaSyntaxError: invalid syntax: you shall not pass
    

    I attached a minimal proof of concept of this approach. The syntax checker is conservative, i.e. it rejects everything unless it has been explicitly marked as supported.
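
    The approach can be boiled down to a few lines (hypothetical names, not the attached proof of concept): a conservative ast.NodeVisitor with a whitelist, raising a SyntaxError subclass whose fields let Python pretty-print the offending line with a caret.

```python
# Conservative syntax checker: reject any AST node not on the whitelist.
import ast

class MyiaSyntaxError(SyntaxError):
    pass

class Fence(ast.NodeVisitor):
    SUPPORTED = (ast.Module, ast.FunctionDef, ast.arguments, ast.arg,
                 ast.Return, ast.Name, ast.Load, ast.BinOp, ast.Mult)

    def __init__(self, filename, lines):
        self.filename, self.lines = filename, lines

    def generic_visit(self, node):
        if not isinstance(node, self.SUPPORTED):
            lineno = getattr(node, 'lineno', 1)
            # Setting (filename, lineno, offset, text) makes Python
            # pretty-print the error with the source line and a caret.
            raise MyiaSyntaxError(
                'unsupported syntax: %s' % type(node).__name__,
                (self.filename, lineno, 1, self.lines[lineno - 1]))
        super().generic_visit(node)

def check(src, filename='<myia>'):
    Fence(filename, src.splitlines()).visit(ast.parse(src))

check('def f(x):\n    return x * x\n')   # accepted: all nodes whitelisted

try:
    check('def f(x):\n    pass\n')       # `pass` is not whitelisted
except MyiaSyntaxError as e:
    print(e)   # includes the filename, line number and offending line
```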

    opened by bartvm 6
  • Bump pillow from 6.2.2 to 9.3.0 in /myia_frontend_pytorch

    Bumps pillow from 6.2.2 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump pillow from 6.2.2 to 9.3.0 in /myia_backend_pytorch

    Bump pillow from 6.2.2 to 9.3.0 in /myia_backend_pytorch

    Bumps pillow from 6.2.2 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)


    dependencies 
    opened by dependabot[bot] 0
  • Bump pillow from 6.2.2 to 9.0.1

    Bump pillow from 6.2.2 to 9.0.1

    Bumps pillow from 6.2.2 to 9.0.1.

    Release notes

    Sourced from pillow's releases.

    9.0.1

    https://pillow.readthedocs.io/en/stable/releasenotes/9.0.1.html

    Changes

    • In show_file, use os.remove to remove temporary images. CVE-2022-24303 #6010 [@​radarhere, @​hugovk]
    • Restrict builtins within lambdas for ImageMath.eval. CVE-2022-22817 #6009 [radarhere]

    9.0.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.0.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.0.1 (2022-02-03)

    • In show_file, use os.remove to remove temporary images. CVE-2022-24303 #6010 [radarhere, hugovk]

    • Restrict builtins within lambdas for ImageMath.eval. CVE-2022-22817 #6009 [radarhere]

    9.0.0 (2022-01-02)

    • Restrict builtins for ImageMath.eval(). CVE-2022-22817 #5923 [radarhere]

    • Ensure JpegImagePlugin stops at the end of a truncated file #5921 [radarhere]

    • Fixed ImagePath.Path array handling. CVE-2022-22815, CVE-2022-22816 #5920 [radarhere]

    • Remove consecutive duplicate tiles that only differ by their offset #5919 [radarhere]

    • Improved I;16 operations on big endian #5901 [radarhere]

    • Limit quantized palette to number of colors #5879 [radarhere]

    • Fixed palette index for zeroed color in FASTOCTREE quantize #5869 [radarhere]

    • When saving RGBA to GIF, make use of first transparent palette entry #5859 [radarhere]

    • Pass SAMPLEFORMAT to libtiff #5848 [radarhere]

    • Added rounding when converting P and PA #5824 [radarhere]

    • Improved putdata() documentation and data handling #5910 [radarhere]

    • Exclude carriage return in PDF regex to help prevent ReDoS #5912 [hugovk]

    • Fixed freeing pointer in ImageDraw.Outline.transform #5909 [radarhere]

    ... (truncated)

    Commits
    • 6deac9e 9.0.1 version bump
    • c04d812 Update CHANGES.rst [ci skip]
    • 4fabec3 Added release notes for 9.0.1
    • 02affaa Added delay after opening image with xdg-open
    • ca0b585 Updated formatting
    • 427221e In show_file, use os.remove to remove temporary images
    • c930be0 Restrict builtins within lambdas for ImageMath.eval
    • 75b69dd Dont need to pin for GHA
    • cd938a7 Autolink CWE numbers with sphinx-issues
    • 2e9c461 Add CVE IDs
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Inference tests from master branch

    Inference tests from master branch

    I reopened my PR here, as @breuleux's PR is now merged.

    • Still WIP, I will fix more tests today.
    • Numpy dependencies removed.
    • Basic support for dict added.
    • Currently 187 failed, 170 passed, 3 xfailed among the added tests
    opened by notoraptor 4
  • array_getitem and array_setitem with single item in each dim non-statically

    array_getitem and array_setitem with single item in each dim non-statically

    It is currently not possible to use array_getitem or array_setitem with a single item in each dimension non-statically, because the current interface always assumes slices (even when the slice is an interval of size 1).

    opened by ethancaballero 0
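The distinction behind this issue can be illustrated with plain NumPy (the helper names below are only illustrative, not part of Myia's API): indexing a dimension with a size-1 slice keeps that dimension, while indexing with a single integer drops it, and the issue asks for the latter to work with runtime-computed indices.

```python
import numpy as np

def getitem_via_slices(x, idx):
    # What a slices-only interface supports: each dimension is indexed
    # with a slice, so a "single item" becomes a size-1 interval and
    # the result keeps every dimension.
    slices = tuple(slice(i, i + 1) for i in idx)
    return x[slices]

def getitem_single(x, idx):
    # What the issue asks for: index each dimension with a single
    # (possibly runtime-computed) integer, dropping those dimensions.
    return x[tuple(idx)]

x = np.arange(12).reshape(3, 4)
print(getitem_via_slices(x, (1, 2)).shape)  # (1, 1): still 2-D
print(getitem_single(x, (1, 2)))            # 6: a scalar element
```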
Owner
Mila
Quebec Artificial Intelligence Institute
Create UIs for prototyping your machine learning model in 3 minutes

Note: We just launched Hosted, where anyone can upload their interface for permanent hosting. Check it out! Welcome to Gradio Quickly create customiza

Gradio 11.7k Jan 7, 2023
Collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning.

Collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning Installation

Pytorch Lightning 1.6k Jan 8, 2023
NeuPy is a Tensorflow based python library for prototyping and building neural networks

NeuPy v0.8.2 NeuPy is a python library for prototyping and building neural networks. NeuPy uses Tensorflow as a computational backend for deep learnin

Yurii Shevchuk 729 Jan 3, 2023
A generalized framework for prototyping full-stack cooperative driving automation applications under CARLA+SUMO.

OpenCDA OpenCDA is a SIMULATION tool integrated with a prototype cooperative driving automation (CDA; see SAE J3216) pipeline as well as regular autom

UCLA Mobility Lab 726 Dec 29, 2022
An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.

An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models. Hyperactive: is very easy to lear

Simon Blanke 422 Jan 4, 2023
:art: Diagram as Code for prototyping cloud system architectures

Diagrams Diagram as Code. Diagrams lets you draw the cloud system architecture in Python code. It was born for prototyping a new system architecture d

MinJae Kwon 27.5k Dec 30, 2022
Emulator for rapid prototyping of Software Defined Networks

Mininet: Rapid Prototyping for Software Defined Networks The best way to emulate almost any network on your laptop! Mininet 2.3.0b2 What is Mininet? M

Mininet 4.7k Jan 5, 2023
Pyroomacoustics is a package for audio signal processing for indoor applications. It was developed as a fast prototyping platform for beamforming algorithms in indoor scenarios.

Summary Pyroomacoustics is a software package aimed at the rapid development and testing of audio array processing algorithms. The content of the pack

Audiovisual Communications Laboratory 1k Jan 9, 2023
Dopamine is a research framework for fast prototyping of reinforcement learning algorithms.

Dopamine Dopamine is a research framework for fast prototyping of reinforcement learning algorithms. It aims to fill the need for a small, easily grok

Google 10k Jan 7, 2023
PyTorch extensions for fast R&D prototyping and Kaggle farming

Pytorch-toolbelt A pytorch-toolbelt is a Python library with a set of bells and whistles for PyTorch for fast R&D prototyping and Kaggle farming: What

Eugene Khvedchenya 1.3k Jan 5, 2023
dbd is a database prototyping tool that enables data analysts and engineers to quickly load and transform data in SQL databases.

dbd: database prototyping tool dbd is a database prototyping tool that enables data analysts and engineers to quickly load and transform data in SQL d

Zdenek Svoboda 47 Dec 7, 2022