xitorch: differentiable scientific computing library

Overview


xitorch is a PyTorch-based library of differentiable functions and functionals that can be widely used in scientific computing applications as well as deep learning.

The documentation can be found at: https://xitorch.readthedocs.io/

Example

Finding the root of a function:

import torch
from xitorch.optimize import rootfinder

def func1(y, A):  # example function
    return torch.tanh(A @ y + 0.1) + y / 2.0

# set up the parameters and the initial guess
A = torch.tensor([[1.1, 0.4], [0.3, 0.8]]).requires_grad_()
y0 = torch.zeros((2,1))  # zeros as the initial guess

# finding a root
yroot = rootfinder(func1, y0, params=(A,))

# calculate the derivatives
dydA, = torch.autograd.grad(yroot.sum(), (A,), create_graph=True)
grad2A, = torch.autograd.grad(dydA.sum(), (A,), create_graph=True)

Modules

  • linalg: Linear algebra and sparse linear algebra module
  • optimize: Optimization and root-finding module
  • integrate: Quadrature and integration module (see the example below)
  • interpolate: Interpolation module
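
For instance, here is a minimal sketch of a differentiable definite integral with the integrate module (the quad signature below follows the online documentation; treat it as illustrative rather than authoritative):

import torch
from xitorch.integrate import quad

def gauss(x, a):  # integrand: exp(-a * x^2)
    return torch.exp(-a * x * x)

a = torch.tensor(2.0).requires_grad_()
y = quad(gauss, 0.0, 1.0, params=(a,))  # integral of gauss from 0 to 1
dyda, = torch.autograd.grad(y, (a,))  # derivative of the integral w.r.t. a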

Requirements

  • python 3.6 or higher
  • pytorch 1.6 or higher (installation instructions: https://pytorch.org/)

Getting started

After fulfilling all the requirements, run the command below to install xitorch:

python -m pip install xitorch

Or if you want to install from source:

git clone https://github.com/xitorch/xitorch/
cd xitorch
python -m pip install -e .

Gallery

Neural mirror design (example 01):

[image: neural mirror design]

Initial velocity optimization in molecular dynamics (example 02):

[image: molecular dynamics]

Comments
  • Joint eigenvalues of a pair of matrices

    Is your feature request related to a problem? Please describe. In a nutshell, I would like to have a differentiable function for joint eigenvalue computation; a MATLAB interface can be found here: https://stackoverflow.com/questions/36551182/how-can-i-find-the-joint-eigenvalues-of-two-matrices-in-matlab.

    Describe the solution you'd like Specifically, I would like to use a distance between covariance matrices as a loss function. From my research, I think the torch.lobpcg() function can do this, but only for the k largest eigenvalues, and backpropagation does not work due to this thread, which you are also involved in: https://github.com/pytorch/pytorch/issues/38948.

    Describe alternatives you've considered I'm too much of a layman to work out algebra like this, but based on my research I've identified it as a generalized eigenvalue problem (http://fourier.eng.hmc.edu/e161/lectures/algebra/node7.html). I also found this workaround (https://www.alglib.net/eigen/symmetric/generalizedsymmevd.php); do you have any idea how to make such a joint eigenvalue solver differentiable?
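
    For what it's worth, one differentiable route (a sketch, not an existing xitorch API): for symmetric A and symmetric positive-definite B, the joint eigenvalues of A v = w B v can be obtained by a Cholesky reduction to a standard symmetric eigenproblem, which plain PyTorch can already differentiate (assuming non-degenerate eigenvalues):

    import torch

    def joint_eigvals(A, B):
        # Reduce A v = w B v (with B = L L^T) to the standard problem
        # C u = w u, where C = L^{-1} A L^{-T} and u = L^T v.
        L = torch.linalg.cholesky(B)
        Linv = torch.linalg.inv(L)
        C = Linv @ A @ Linv.transpose(-2, -1)
        return torch.linalg.eigvalsh(C)  # differentiable for distinct eigenvalues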


    enhancement 
    opened by bsun0802 11
  • Fixed bug in GMRES

    This PR fixes a bug in 7919bb0 that resulted in a large residual error even when the algorithm converged. The performance is now comparable to both cg and bicgstab.

    Tested with A of size (100, 100) and b of size (2, 100, 100); the max-norm of the residual error for each method is listed below, with a reproduction sketch after the list:

    • gmres 6.88e-06
    • bicgstab 1.19e-05
    • cg 1.01e-05
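
    A hedged sketch of how such a comparison might be reproduced with xitorch.linalg.solve (the method names are assumed from the documentation):

    import torch
    import xitorch
    from xitorch import linalg

    torch.manual_seed(0)
    n = 100
    A = torch.rand(n, n)
    A = A @ A.T + n * torch.eye(n)  # well-conditioned SPD matrix
    b = torch.rand(n, 1)
    Aop = xitorch.LinearOperator.m(A)
    for method in ("gmres", "bicgstab", "cg"):
        x = linalg.solve(Aop, b, method=method)
        print(method, (A @ x - b).abs().max().item())  # max-norm of residual
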
    opened by yhl48 2
  • Complex Hermitian matrices not recognized as Hermitian.

    Describe the bug We were using xitorch with complex Hermitian matrices and noticed that it checks for symmetry rather than Hermiticity, so it raised a wrong error message: is_hermitian = torch.allclose(mat, mat.transpose(-2, -1)). Of course this can be fixed by adding a .conj() (tested; the error message disappears), but I am not sure whether anything else in the code relies on the matrix being real.
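
    For reference, a minimal sketch of the corrected check (assuming the conjugate transpose is the only change needed):

    is_hermitian = torch.allclose(mat, mat.transpose(-2, -1).conj())  # conjugate transpose, not plain transpose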

    To Reproduce

    import torch, xitorch
    from xitorch import linalg 
    
    a = torch.rand(2,2) + 1j*torch.rand(2,2)
    a = a+a.T.conj()
    op = xitorch.LinearOperator.m(a)
    linalg.symeig(op)
    

    Expected behavior Complex Hermitian operators are recognized as such.

    Systems:

    • Python version: 3.8.8
    • PyTorch version: 1.9
    • xitorch version: 0.2.0
    bug 
    opened by JonathanSchmidt1 2
  • GMRES

    Generalised minimal residual method

    Basic implementation of GMRES for now; future refinements will include:

    1. restart mechanism
    2. termination tolerance
    3. preconditioning
    4. faster computation by replacing lstsq with triangular_solve (potentially)

    The current implementation is comparable in speed to scipy; compared using A.size = (100000, 2, 2) and B.size = (100000, 2, 1), scipy takes 22.07 s and this gmres takes 26.61 s.

    Note that the current implementation fails two tests due to assert torch.allclose(ax, bmat); i.e., the deviation from the true value is quite big, around 1e-3. This is partially due to the slow computation, since it is hard to run for more iterations, and will be addressed in the next pull request. For now, to pass the tests, rtol is set to a large value in commit 23b0180.

    opened by yhl48 1
  • Example 2 broken

    Running example 02, I run into the following error:

    Traceback (most recent call last):
      File "/home/jwittke/py_tmp/main.py", line 125, in <module>
        mainopt()
      File "/home/jwittke/py_tmp/main.py", line 110, in mainopt
        loss, yt = get_loss(pos, vel, ts, pos_target)
      File "/home/jwittke/py_tmp/main.py", line 29, in get_loss
        yt = solve_ivp(dydt, ts, y0, fwd_options={"method": "rk4"})
      File "/home/jwittke/anaconda3/lib/python3.7/site-packages/xitorch/integrate/solve_ivp.py", line 92, in solve_ivp
        return _SolveIVP.apply(pfcn, ts, fwd_options, bck_options, len(params), y0, *params, *pfcn.objparams())
      File "/home/jwittke/anaconda3/lib/python3.7/site-packages/xitorch/integrate/solve_ivp.py", line 114, in forward
        yt = solver(pfcn, ts, y0, params, **config)
      File "/home/jwittke/anaconda3/lib/python3.7/site-packages/xitorch/_impls/integrate/ivp/adaptive_rk.py", line 178, in rk45_adaptive
        return _rk_adaptive(fcn, ts, y0, params, RK45, **kwargs)
      File "/home/jwittke/anaconda3/lib/python3.7/site-packages/xitorch/_impls/integrate/ivp/adaptive_rk.py", line 164, in _rk_adaptive
        return solver.solve()
      File "/home/jwittke/anaconda3/lib/python3.7/site-packages/xitorch/_impls/integrate/ivp/adaptive_rk.py", line 72, in solve
        rk_state = self._step(rk_state, ts[i])
      File "/home/jwittke/anaconda3/lib/python3.7/site-packages/xitorch/_impls/integrate/ivp/adaptive_rk.py", line 83, in _step
        rk_state, t1_achieved = self._single_step(rk_state, t1)
      File "/home/jwittke/anaconda3/lib/python3.7/site-packages/xitorch/_impls/integrate/ivp/adaptive_rk.py", line 98, in _single_step
        ynew, fnew = rk_step(self.func, t0, y0, f0, hstep, abck)
      File "/home/jwittke/anaconda3/lib/python3.7/site-packages/xitorch/_impls/integrate/ivp/adaptive_rk.py", line 16, in rk_step
        K[s] = func(t + c * h, y + dy)
    RuntimeError: The size of tensor a (2) must match the size of tensor b (128) at non-singleton dimension 3

    I am using:

    conda version : 4.9.2
    conda-build version : 3.18.11
    python version : 3.7.6.final.0
    platform : linux-64
    user-agent : conda/4.9.2 requests/2.23.0 CPython/3.7.6 Linux/4.4.0-194-generic ubuntu/16.04.7 glibc/2.23

    pytorch 1.7.1 py3.7_cpu_0 [cpuonly] pytorch

    bug 
    opened by jwittke 1
  • Custom termination criterion for nonlinear solver

    Is your feature request related to a problem? Please describe. When using Xitorch through DQC, termination criteria for the Kohn-Sham solver are given in terms of tolerances on the self-consistent parameter. However, in most research circumstances, tolerances on the energy of the system are a more appropriate measure. These cannot currently be implemented in DQC, and the change to accommodate them has to be made in Xitorch.

    Describe the solution you'd like A keyword argument custom_terminator passed to xitorch._impls.optimize.root.rootsolver._nonlin_solver through the fwd_options dictionary from DQC's KS.run(...).

    The value passed into the keyword argument should be an object which defines a method check(x: torch.Tensor, y: torch.Tensor, dx: torch.Tensor) -> bool, and evaluates to True if the termination criterion has been met (otherwise False).

    If no value is passed, this should default to xitorch._impls.optimize.root.rootsolver.TerminationCondition(f_tol, f_rtol, y_norm, x_tol, x_rtol), which is the current criterion. The signature of the check method on TerminationCondition is changed from check(xnorm: float, ynorm: float, dxnorm: float) -> bool to the more generic signature given above, for consistency.
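
    As an illustration, a hypothetical terminator following the proposed interface might look like the sketch below (EnergyTerminator, energy_fn, and e_tol are made-up names for this sketch, not existing Xitorch or DQC API):

    import torch

    class EnergyTerminator:
        # Hypothetical: stop when the change in energy between iterations
        # drops below e_tol, using the proposed check() interface.
        def __init__(self, energy_fn, e_tol: float = 1e-8):
            self.energy_fn = energy_fn  # callable mapping x to a scalar energy
            self.e_tol = e_tol
            self._last_e = None

        def check(self, x: torch.Tensor, y: torch.Tensor, dx: torch.Tensor) -> bool:
            e = self.energy_fn(x).item()
            done = self._last_e is not None and abs(e - self._last_e) < self.e_tol
            self._last_e = e
            return done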

    Describe alternatives you've considered

    • An external criterion varying x_tol, and f_tol depending on the energy of the current solution → inaccurate and requires significant changes to DQC
    • Adding an energy termination criterion to Xitorch and providing a boolean flag to determine which criterion should be used → less flexible, cannot provide a history of the convergence energies

    Overall, this seems to be a smooth solution, providing maximal flexibility for minimal changes and expanding the scientific capabilities of both Xitorch and DQC. No other alternative considered comes close to these effects.

    Additional context This change ensures we can use DQC and Xitorch with significant speed-up without compromising accuracy. I can implement it myself, and would ideally have it approved before Christmas.

    enhancement 
    opened by KarimAED 0
  • Cannot install from sdist obtained from PyPI

    Describe the bug

    Installation with pip from sdist fails (version 0.3.0).

    To Reproduce

    Presumably, the directory layout from the git repository is assumed in setup.py, but the relevant files are not actually included in the sdist.

    source tree in: /home/conda/staged-recipes/build_artifacts/xitorch_1649528643673/work
    export PREFIX=/home/conda/staged-recipes/build_artifacts/xitorch_1649528643673/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla
    export BUILD_PREFIX=/home/conda/staged-recipes/build_artifacts/xitorch_1649528643673/_build_env
    export SRC_DIR=/home/conda/staged-recipes/build_artifacts/xitorch_1649528643673/work
    Using pip 22.0.4 from $PREFIX/lib/python3.10/site-packages/pip (python 3.10)
    Non-user install because user site-packages disabled
    Ignoring indexes: https://pypi.org/simple
    Created temporary directory: /tmp/pip-ephem-wheel-cache-aj1o1cc3
    Created temporary directory: /tmp/pip-req-tracker-hqkvtmg0
    Initialized build tracking at /tmp/pip-req-tracker-hqkvtmg0
    Created build tracker: /tmp/pip-req-tracker-hqkvtmg0
    Entered build tracker: /tmp/pip-req-tracker-hqkvtmg0
    Created temporary directory: /tmp/pip-install-gylksics
    Processing $SRC_DIR
      Added file://$SRC_DIR to build tracker '/tmp/pip-req-tracker-hqkvtmg0'
      Running setup.py (path:$SRC_DIR/setup.py) egg_info for package from file://$SRC_DIR
      Created temporary directory: /tmp/pip-pip-egg-info-uaj5m5_v
      Running command python setup.py egg_info
      Preparing metadata (setup.py): started
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/home/conda/staged-recipes/build_artifacts/xitorch_1649528643673/work/setup.py", line 49, in <module>
          install_requires=get_requirements("requirements.txt"),
        File "/home/conda/staged-recipes/build_artifacts/xitorch_1649528643673/work/setup.py", line 32, in get_requirements
          with open(absdir(fname), "r") as f:
      FileNotFoundError: [Errno 2] No such file or directory: '/home/conda/staged-recipes/build_artifacts/xitorch_1649528643673/work/requirements.txt'
      error: subprocess-exited-with-error
      
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> See above for output.
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
      full command: /home/conda/staged-recipes/build_artifacts/xitorch_1649528643673/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/bin/python -c '
      exec(compile('"'"''"'"''"'"'
      # This is <pip-setuptools-caller> -- a caller that pip uses to run setup.py
      #
      # - It imports setuptools before invoking setup.py, to enable projects that directly
      #   import from `distutils.core` to work with newer packaging standards.
      # - It provides a clear error message when setuptools is not installed.
      # - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so
      #   setuptools doesn'"'"'t think the script is `-c`. This avoids the following warning:
      #     manifest_maker: standard file '"'"'-c'"'"' not found".
      # - It generates a shim setup.py, for handling setup.cfg-only projects.
      import os, sys, tokenize
    
      try:
          import setuptools
      except ImportError as error:
          print(
              "ERROR: Can not execute `setup.py` since setuptools is not available in "
              "the build environment.",
              file=sys.stderr,
          )
          sys.exit(1)
      
      __file__ = %r
      sys.argv[0] = __file__
      
      if os.path.exists(__file__):
          filename = __file__
          with tokenize.open(__file__) as f:
              setup_py_code = f.read()
      else:
          filename = "<auto-generated setuptools caller>"
          setup_py_code = "from setuptools import setup; setup()"
      
      exec(compile(setup_py_code, filename, "exec"))
      '"'"''"'"''"'"' % ('"'"'/home/conda/staged-recipes/build_artifacts/xitorch_1649528643673/work/setup.py'"'"',), "<pip-setuptools-caller>", "exec"))' egg_info --egg-base /tmp/pip-pip-egg-info-uaj5m5_v
      cwd: /home/conda/staged-recipes/build_artifacts/xitorch_1649528643673/work/
      Preparing metadata (setup.py): finished with status 'error'
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    

    Expected behavior

    Install cleanly from sdist.

    Systems:

    • OS: Linux, MacOS, Windows
    • Python version: 3.10
    • PyTorch version: 1.11
    • xitorch version: 0.3.0

    Additional context

    See https://github.com/conda-forge/staged-recipes/pull/18636
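
    One possible remedy (a sketch under the assumption that get_requirements is the helper shown in the traceback) is to ship requirements.txt in the sdist via MANIFEST.in, or to make the helper tolerate its absence:

    import os

    def get_requirements(fname):
        # tolerate a missing requirements.txt in the sdist
        if not os.path.exists(fname):
            return []
        with open(fname, "r") as f:
            return [line.strip() for line in f if line.strip()]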

    bug 
    opened by awvwgk 0
  • gradient-vector product and vector-gradient product for higher-order gradients

    Hi,

    I'm trying to see what the best way is to calculate gradient-vector products (gvp) and vector-gradient products (vgp) for Hessians and third-order gradients.

    Thanks, JW
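
    For reference, one common pattern in plain PyTorch (not xitorch-specific) is the Hessian-vector product via double backward; a third-order gradient-vector product follows by differentiating once more:

    import torch

    def hvp(f, x, v):
        # Hessian-vector product: d/dx (grad f(x) . v)
        g, = torch.autograd.grad(f(x), x, create_graph=True)
        Hv, = torch.autograd.grad((g * v).sum(), x, create_graph=True)
        return Hv

    x = torch.randn(3, requires_grad=True)
    v = torch.randn(3)
    print(hvp(lambda z: (z ** 4).sum(), x, v))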

    enhancement 
    opened by exenGT 0
  • A RuntimeError occurs when calling _Jac.fullmatrix()

    This occurred when I was implementing an iterative algorithm and calculating gradients, with a call grad = jac(...) followed by grad.fullmatrix(). The error does not occur when I use grad.H.fullmatrix() instead.

    I tried printing other attributes of grad, like grad.dfdy and grad.yout, and they work just fine. As the error message says, it might be that the gradient of dfdy with respect to grad.v is lost.

    File "/home/tju/zyzh/abinitialTransport/calc/SCF.py", line 182, in backward grad.fullmatrix() File "/home/tju/.conda/envs/abinit/lib/python3.8/site-packages/xitorch/_core/linop.py", line 354, in fullmatrix return self.mm(V) # (B1,B2,...,Bb,np,nq) File "/home/tju/.conda/envs/abinit/lib/python3.8/site-packages/xitorch/_core/linop.py", line 272, in mm ynew = self._mv(xnew) # (r,...,p) File "/home/tju/.conda/envs/abinit/lib/python3.8/site-packages/xitorch/grad/jachess.py", line 167, in _mv dfdyf, = torch.autograd.grad(dfdy, (v,), grad_outputs=gy1[i].reshape(self.inshape), File "/home/tju/.conda/envs/abinit/lib/python3.8/site-packages/torch/autograd/init.py", line 226, in grad return Variable._execution_engine.run_backward( RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.

    To Reproduce This is the function I wrote.

    class SCF(torch.autograd.Function):
        @staticmethod
        def forward(ctx, fcn, x0, maxIter, err, method='default', *params):
            # with torch.no_grad():
            #     x = fcn(x0, *params)
            x_ = fcn(x0, *params)

            if method == "default":
                it = 0
                old_x = x0
                while (x_ - old_x).norm() > err and it < maxIter:
                    old_x = x_
                    x_ = fcn(x_, *params)

            elif method == 'LBFGS':
                optim = LBFGS(params=[x_], tolerance_grad=1e-10, tolerance_change=1e-15)

                def new_fcn():
                    return (x_ - fcn(x_, *params)).abs().sum()

                for i in range(maxIter):
                    optim.step(new_fcn)

            ctx.save_for_backward(x_, *params)
            ctx.fcn = fcn

            return x_

        @staticmethod
        def backward(ctx, grad_outputs):
            x_ = ctx.saved_tensors[0].clone().requires_grad_()
            params = ctx.saved_tensors[1:]

            idx = [i for i in range(len(params)) if params[i].requires_grad]

            fcn = ctx.fcn

            def new_fcn(x, *params):
                return -fcn(x, *params)

            grad = jac(fcn=new_fcn, params=(x_, *params), idxs=[0])[0]
            grad.fullmatrix()

            pre = tLA.solve(grad.H.fullmatrix().real, -grad_outputs.reshape(-1, 1))
            pre = pre.reshape(grad_outputs.shape)

            with torch.enable_grad():
                params_copy = [p.clone().requires_grad_() for p in params]
                yfcn = new_fcn(x_, *params_copy)

            grad = torch.autograd.grad(yfcn, [params_copy[i] for i in idx], grad_outputs=pre,
                                       create_graph=torch.is_grad_enabled(),
                                       allow_unused=True)
            grad_out = [None for _ in range(len(params))]
            for i in range(len(idx)):
                grad_out[idx[i]] = grad[i]

            return None, None, None, None, None, *grad_out

    Expected behavior No error. Many thanks for any help in finding out what the problem is.

    bug 
    opened by floatingCatty 2
  • Rootfinder fails with a simple case

    Describe the bug xitorch.optimize.rootfinder fails with a simple case.

    To Reproduce

    import torch
    from xitorch.optimize import rootfinder
    
    def fcn(x: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        res0 = x[0] - x[1] - 1.0
        res1 = x[2] + x[3]
        res2 = x[0] - x[1] - x[3] * r
        res3 = x[4] + x[5]
        res4 = x[2] + x[4]
        res5 = x[1]
        return torch.stack([res2, res3, res0, res1, res4, res5])
    
    x0 = torch.zeros(6)
    r = torch.tensor(1.0)
    x = rootfinder(fcn, x0, params=(r,))
    print(x)
    

    Produces

    /mnt/c/Users/firma/Documents/Projects/Git/xitorch/xitorch/_impls/optimize/root/rootsolver.py:163: ConvergenceWarning: The rootfinder does not converge after 700 iterations. Best |dx|=0.000e+00, |f|=1.000e+00 at iter 0
      warnings.warn(ConvergenceWarning(msg))
    

    Expected behavior This should not fail; using numpy and scipy.optimize.root succeeds on this case (even with the same broyden1 method).

    Systems:

    • OS: WSL2 Ubuntu
    • Python version: 3.9.7
    • PyTorch version: 1.10.0
    • xitorch version: 0.4.0.dev0+1e875cc

    Additional context Changing the res order from torch.stack([res2, res3, res0, res1, res4, res5]) to torch.stack([res0, res1, res2, res3, res4, res5]) makes it converge successfully.

    bug 
    opened by mfkasim1 0
  • Incompatibility with pytorch 1.10

    Describe the bug Upgrading pytorch to 1.10 makes some of the tests fail (while they succeed with 1.9 and 1.8).

    To Reproduce Steps to reproduce the behavior:

    1. Install xitorch with pytorch 1.10
    2. Run the test (cd xitorch/_tests/; pytest)
    3. See error
    FAILED test_optimize.py::test_equil[dtype0-device0-DummyModule] - torch.autograd.gradcheck.GradcheckError: Jacobian m...
    FAILED test_optimize.py::test_equil[dtype1-device1-DummyNNModule] - torch.autograd.gradcheck.GradcheckError: Jacobian...
    FAILED test_optimize.py::test_equil[dtype2-device2-DummyModule] - torch.autograd.gradcheck.GradcheckError: While cons...
    

    Expected behavior Seamless transition to 1.10

    Systems:

    • OS: Ubuntu 20.04 WSL
    • Python version: 3.8.5
    • PyTorch version: 1.10
    • xitorch version: 0.4.0.dev0+47631ef

    Additional context
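
    For reference, a hedged sketch of the kind of gradient check these tests perform (reusing the rootfinder example from the README in double precision; this is not the actual test code):

    import torch
    from torch.autograd import gradcheck
    from xitorch.optimize import rootfinder

    def fcn(y, A):
        return torch.tanh(A @ y + 0.1) + y / 2.0

    A = torch.tensor([[1.1, 0.4], [0.3, 0.8]], dtype=torch.float64, requires_grad=True)
    y0 = torch.zeros((2, 1), dtype=torch.float64)
    gradcheck(lambda A: rootfinder(fcn, y0, params=(A,)), (A,))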

    bug 
    opened by mfkasim1 1