library for nonlinear optimization, wrapping many algorithms for global and local, constrained or unconstrained, optimization

Overview


NLopt is a library for nonlinear local and global optimization, for functions with and without gradient information. It is designed as a simple, unified interface and packaging of several free/open-source nonlinear optimization libraries.

The latest release can be downloaded from the NLopt releases page on Github, and the NLopt manual is hosted on readthedocs.

NLopt is compiled and installed with the CMake build system (see CMakeLists.txt file for available options):

git clone git://github.com/stevengj/nlopt
cd nlopt
mkdir build
cd build
cmake ..
make
sudo make install

(To build the latest development sources from git, you will need SWIG to generate the Python and Guile bindings.)

Once it is installed, #include <nlopt.h> in your C/C++ programs and link it with -lnlopt -lm. You may need to use a C++ compiler to link in order to include the C++ libraries (which are used internally by NLopt, even though it exports a C API). See the C reference manual.
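
For example, a minimal program using the C API might look like the following. This is an illustrative sketch (the objective, algorithm choice, and tolerance are arbitrary), not taken from the NLopt documentation:

#include <stdio.h>
#include <nlopt.h>

/* Objective: f(x) = (x0 - 1)^2 + (x1 - 2)^2, with an optional gradient. */
static double myfunc(unsigned n, const double *x, double *grad, void *data)
{
    (void)n; (void)data;
    if (grad) {
        grad[0] = 2.0 * (x[0] - 1.0);
        grad[1] = 2.0 * (x[1] - 2.0);
    }
    return (x[0] - 1.0) * (x[0] - 1.0) + (x[1] - 2.0) * (x[1] - 2.0);
}

int main(void)
{
    nlopt_opt opt = nlopt_create(NLOPT_LD_LBFGS, 2); /* gradient-based local algorithm, 2 variables */
    nlopt_set_min_objective(opt, myfunc, NULL);
    nlopt_set_xtol_rel(opt, 1e-6);

    double x[2] = {0.0, 0.0}; /* starting point */
    double minf;              /* minimum objective value found */
    if (nlopt_optimize(opt, x, &minf) < 0)
        printf("nlopt failed!\n");
    else
        printf("minimum at (%g, %g), f = %g\n", x[0], x[1], minf);

    nlopt_destroy(opt);
    return 0;
}

As noted above, linking (e.g. with -lnlopt -lm) may require a C++ compiler even though the program itself uses the C API.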

There are also interfaces for C++, Fortran, Python, Matlab or GNU Octave, OCaml, GNU Guile, GNU R, Lua, Rust, and Julia. Interfaces for other languages may be added in the future.

Comments
  • nlopt compilation failed at "make" step on AIX7.2

    nlopt compilation failed at "make" step on AIX7.2

    I am trying to compile nlopt on AIX7.2. The first "cmake" step finished successfully. However, the second "make" step failed with the error Undefined symbol: __tls_get_addr. Can you help me figure out the issue? Thanks.

    [ 66%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/global.cc.o
    [ 68%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/linalg.cc.o
    [ 70%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/local.cc.o
    [ 71%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/stogo.cc.o
    [ 73%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/tools.cc.o
    [ 75%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/evolvent.cc.o
    [ 76%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/solver.cc.o
    [ 78%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/local_optimizer.cc.o
    [ 80%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/ags.cc.o
    [ 81%] Linking CXX shared library libnlopt
    ld: 0711-317 ERROR: Undefined symbol: __tls_get_addr
    ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information.
    collect2: error: ld returned 8 exit status
    make[2]: *** [CMakeFiles/nlopt.dir/build.make:767: libnlopt.a] Error 1
    make[2]: Leaving directory '/software/thirdparty/nlopt-master/build'
    make[1]: *** [CMakeFiles/Makefile2:179: CMakeFiles/nlopt.dir/all] Error 2
    make[1]: Leaving directory '/software/thirdparty/nlopt-master/build'
    make: *** [Makefile:163: all] Error 2
    
    opened by bergen288 29
  • Prefer target_include_directories in CMake build script

    Prefer target_include_directories in CMake build script

    This PR facilitates the inclusion of nlopt into other CMake projects.

    • The "private" include directories are defined in a per-target basis via target_include_directories instead of include_directories. This has the advantage that it doesn't "pollute" parent projects.
    • Public include directories are set via the INTERFACE argument in target_include_directories. To keep it simple the original header nlopt.h is simply copied in the ${PROJECT_BINARY_DIR}/api folder, which is used as the interface include directory for nlopt. Unfortunatelly, absolute paths cannot be given as interface include directories for installed targets, hence the need for the trick with $<BUILD_INTERFACE:...>. However this generator expression has been introduced in cmake 3.0, so the minimum required version is bumped to 3.0, but maybe you don't want that?
    • Similarly, use target_compile_definitions instead of a global add_definitions.
    • I couldn't make per-target include directories work with SWIG, so there is still an include_directories call in swig/CMakeLists.txt. This is not ideal, but if you have any idea how to improve this, I'm all ears.

    Now, building nlopt as part of other projects is as simple as

    add_subdirectory(ext/nlopt)
    target_link_libraries(my_program nlopt)
    
    opened by jdumas 15
  • Implement C++ style functors as targets for objectives

    Implement C++ style functors as targets for objectives

    This PR implements a wrapper nlopt::functor_wrapper for C++ style functors via std::function, and two new overloads of nlopt::set_min_objective, nlopt::set_max_objective.

    In order to allow that, a new member field in the myfunc_data struct is added: functor_type functor;, where functor_type is defined as

    typedef std::function<double(unsigned, const double*, double*)> functor_type;
    

    This is not introduced as a pointer (like the other function-pointers are) because std::function is already a container that stores a pointer, and abstracts it away.

    Important: note that the signature for the functor does not include void* data unlike all other function-pointers. That is because it is assumed that the functor already has all the data it needs.

    This PR allows now to write the following:

    class UserDefinedObjective {
      private:
        ImportantData data;
      public:
        UserDefinedObjective() = delete;
        UserDefinedObjective(ImportantData data) :
          data(std::move(data)) {}
        double operator()(unsigned n, const double* x, double* grad) const
        {
          // compute objective(x) and ∇objective(x) using this->data
        }
    };
    
    int main()
    {
      ImportantData data;
      UserDefinedObjective objective(std::move(data));
    
      nlopt::opt optimizer;
      // other nlopt settings
      optimizer.set_max_objective(std::move(objective));
    
      optimizer.optimize(...);
    
      return 0;
    }
    

    The same works with C++ lambdas, regular functions, and even class member functions (see std::function).
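
    For illustration, with the std::function-based overload this PR proposes, a capturing lambda could be passed directly. This is a hypothetical sketch based on the PR description (algorithm, dimension, and objective are arbitrary), not part of the released NLopt API:

    #include <vector>
    #include <nlopt.hpp>
    
    int main()
    {
      double target = 2.0; // captured state, no void* needed
    
      nlopt::opt opt(nlopt::LD_MMA, 1);
      // assumes the std::function overload of set_min_objective added by this PR
      opt.set_min_objective([target](unsigned, const double* x, double* grad) {
        if (grad) grad[0] = 2.0 * (x[0] - target);
        return (x[0] - target) * (x[0] - target);
      });
      opt.set_xtol_rel(1e-8);
    
      std::vector<double> x{0.0};
      double minf;
      opt.optimize(x, minf);
      return 0;
    }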

    This PR also introduces a CMake macro NLOPT_add_cpp_test to quickly add cpp tests, and creates a test cpp_functor.cxx to actually test the new functionality.

    Closes #219.

    opened by dburov190 14
  • Added example of automatic tests

    Added example of automatic tests

    • Added a new_test target to generate the test executable
    • Updated CMakeLists.txt to add the test directory as a subfolder
    • Added two tests using the new_test executable
    • Now you can run the test suite by typing either "make test" or "ctest" in the build directory
    opened by boris-il-forte 14
  • C++11 idiom

    C++11 idiom

    I found nlopt to be a great library, but using it through the C++ API is really frustrating. You need to provide a void* for passing data to the objective/constraint functions, with all the problems that may bring.

    Also, you need to pass a function pointer, so you can't use lambda functions with captures (which would avoid passing the void*).

    I've written a thin wrapper on top of NLopt which tries to offer a more modern C++ API, enabling the use of lambdas and hiding the use of void* from the API.

    Would you be interested in having this merged?
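
    The kind of wrapper described here can be sketched on top of the existing void*-based API by storing the callable in a std::function and passing it through the data pointer via a small trampoline. This is an illustrative sketch under those assumptions, not the author's actual wrapper:

    #include <functional>
    #include <vector>
    #include <nlopt.hpp>
    
    // Adapter from a capturing callable to NLopt's C-style callback.
    using objective_fn = std::function<double(unsigned, const double*, double*)>;
    
    static double trampoline(unsigned n, const double* x, double* grad, void* data)
    {
      return (*static_cast<objective_fn*>(data))(n, x, grad);
    }
    
    int main()
    {
      double target = 3.0; // captured state that would otherwise travel through void*
      objective_fn obj = [target](unsigned, const double* x, double* grad) {
        if (grad) grad[0] = 2.0 * (x[0] - target);
        return (x[0] - target) * (x[0] - target);
      };
    
      nlopt::opt opt(nlopt::LD_LBFGS, 1);
      opt.set_min_objective(trampoline, &obj); // obj must outlive the optimization
      opt.set_xtol_rel(1e-8);
    
      std::vector<double> x{0.0};
      double minf;
      opt.optimize(x, minf);
      return 0;
    }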

    opened by jjcasmar 13
  • DIRECT takes impossibly long to reach xtol

    DIRECT takes impossibly long to reach xtol

    Unless I'm mistaken, the XTOL stopping criterion for DIRECT (the cdirect version) can't be used when searching large multi-dimensional spaces, because it requires all hyper-rectangles everywhere to be divided down to below the x-tolerances before stopping. This will take an impossibly long time.

    Wouldn't it make more sense to stop as soon as one (or a few) of the rectangles is small? This could be done by inverting some of the logic for the xtol_reached variable within the cdirect.c function divide_good_rects().

    I can attach some test code here or work towards a pull request if that would be helpful.

    Cheers, Joel

    opened by jcottrell-ellex 11
  • Website & download URLs down

    Website & download URLs down

    I get the following while trying to install:

    configure: Need to download and build NLopt
    trying URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
    Warning in download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz",  :
      unable to connect to 'ab-initio.mit.edu' on port 80.
    Error in download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz",  : 
      cannot open URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
    Execution halted
    /bin/tar: This does not look like a tar archive
    
    gzip: stdin: unexpected end of file
    /bin/tar: Child returned status 1
    /bin/tar: Error is not recoverable: exiting now
    

    The website, http://ab-initio.mit.edu/nlopt/, also does not display in my browser (ERR_CONNECTION_TIMED_OUT)

    opened by IljaKroonen 11
  • Prevent a conditional jump based on uninitialized value in nlopt_create.

    Prevent a conditional jump based on uninitialized value in nlopt_create.

    nlopt_set_lower_bounds1 reads from opt->ub before it has ever been written.
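
    A minimal call sequence that exercises this path might look like the following; this is an illustrative sketch (algorithm and bound value are arbitrary), with the uninitialized read being the one reported by memcheck:

    #include <nlopt.h>
    
    int main(void)
    {
      nlopt_opt opt = nlopt_create(NLOPT_LN_COBYLA, 2);
      /* Per this report, setting a scalar lower bound right after creation
         makes nlopt_set_lower_bounds1 read opt->ub before it has been written. */
      nlopt_set_lower_bounds1(opt, -1.0);
      nlopt_destroy(opt);
      return 0;
    }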

    We caught this in our nightly memcheck CI build for RobotLocomotion/drake: https://drake-cdash.csail.mit.edu/viewDynamicAnalysisFile.php?id=14355

    Contributes to RobotLocomotion/drake#3873



    opened by david-german-tri 11
  • New release

    New release

    I'm running into issue https://github.com/stevengj/nlopt/issues/33. Could you make a new release that includes that fix, please? It would fix dozens of R packages on NixOS.

    Looks like the last one was in 2014!

    opened by langston-barrett 11
  • Generate missing nlopt.hpp and nlopt.f (CMake), use GNUInstallDirs (CMake), fix for MSVC 2015

    Generate missing nlopt.hpp and nlopt.f (CMake), use GNUInstallDirs (CMake), fix for MSVC 2015

    • Create missing nlopt.hpp and nlopt.f when building with CMake
    • Use CMake's GNUInstallDirs (e.g., ${CMAKE_INSTALL_LIBDIR} instead of lib) depending on platform
    • Fix for MSVC 2015 compiler

    opened by rickertm 11
  • Running tests with CTest is broken

    Running tests with CTest is broken

    If I check out NLopt, build it, and try to run the tests with ctest, it fails horribly because testopt was not built.

    The way this works is a non-standard workflow and prevents running the tests e.g. when NLopt is built as a CMake external project.

    Please, either just build testopt by default (i.e. remove EXCLUDE_FROM_ALL), or else add an option (e.g. NLOPT_ENABLE_TESTS) that controls whether testopt is built by default and whether any add_test are invoked.

    opened by mwoehlke-kitware 10
  • undefined reference to `nlopt_get_errmsg'

    undefined reference to `nlopt_get_errmsg'

    Hi,

    I'm getting the following linker error message :

    in function `nlopt::opt::get_errmsg() const':
    Hamiltonian.cpp:(.text._ZNK5nlopt3opt10get_errmsgEv[_ZNK5nlopt3opt10get_errmsgEv]+0x5b): undefined reference to `nlopt_get_errmsg'
    collect2: error: ld returned 1 exit status

    when trying to compile code which calls nlopt.

    I compiled nlopt 2.7.1 using the make install command and got:

    Install the project...
    -- Install configuration: "Release"
    -- Installing: /usr/local/lib/pkgconfig/nlopt.pc
    -- Installing: /usr/local/include/nlopt.h
    -- Installing: /usr/local/include/nlopt.hpp
    -- Installing: /usr/local/include/nlopt.f
    -- Installing: /usr/local/lib/libnlopt.so.0.11.1
    -- Installing: /usr/local/lib/libnlopt.so.0
    -- Set runtime path of "/usr/local/lib/libnlopt.so.0.11.1" to "/usr/local/lib"
    -- Installing: /usr/local/lib/libnlopt.so
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptLibraryDepends.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptLibraryDepends-release.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptConfig.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptConfigVersion.cmake
    -- Installing: /usr/local/share/man/man3/nlopt.3
    -- Installing: /usr/local/share/man/man3/nlopt_minimize.3
    
    

    and my CMakeLists.txt looks like

    cmake_minimum_required(VERSION 3.13.4)
    project(myProject)
    
    set(CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
    set(CMAKE_CXX_STANDARD 20)
    
    add_executable(myProject main.cpp)
    target_link_libraries(myProject PUBLIC nlopt ${CPLEX_LIBRARIES} ${CMAKE_DL_LIBS})
    

    I found that someone already had a similar issue, but the proposed fix does not seem to apply to my setup.

    opened by Griset 0
  • Reduce number of gradient calculations in LD-MMA

    Reduce number of gradient calculations in LD-MMA

    The Svanberg MMA paper notes that for the CCSA algorithms described, gradients are only required in the outer iterations. "Each new inner iteration requires function values, but no derivatives."

    However, it appears that the implementation of LD-MMA calculates a gradient in the inner as well as the outer iterations. I request that the implementation be updated to reduce the number of gradient calculations.

    I believe this is a two-line change: This line could be changed to something like fcur = f(n, xcur, NULL, f_data);, and then after line 299 in the same file one could add the code if (inner_done) { fcur = f(n, xcur, dfdx_cur, f_data); }.

    This would duplicate the objective call once per outer iteration, but since gradient calculations tend to dominate the run time of objective function calls, there should be an overall net saving whenever more than one inner iteration is used.

    I regret not being able to try this out myself. I don't have C set up on my machine and have never coded in C, so I would be extremely slow at running tests (I'm using the Python API). Thanks for considering!

    opened by cpixton 0
  • Simple Academic Use Case with unexpected MMA behavior.

    Simple Academic Use Case with unexpected MMA behavior.

    Hello, while doing some tests with multiple starting guesses and multiple optimization algorithms, I found a case where your solver behaves unexpectedly. Luckily, this is a simple analytical example that is easily reproducible.

    import nlopt
    from numpy import array

    def f(x, grad):
        if grad.size > 0:
            grad[0] = 2. * (x[0] - 1.)
            grad[1] = 2. * (x[1] - 1.)
        return (x[0] - 1.) ** 2. + (x[1] - 1.) ** 2.

    def g(x, grad):
        if grad.size > 0:
            grad[0] = 1.
            grad[1] = 1.
        return x[0] + x[1] - 1.

    if __name__ == "__main__":
        algorithm = nlopt.LD_MMA
        n = 2
        opt = nlopt.opt(algorithm, n)
        lb = array([0., 0.])
        ub = array([1., 1.])
        x0 = array([0.25, 1])
        opt.set_min_objective(f)
        opt.set_lower_bounds(lb)
        opt.set_upper_bounds(ub)
        opt.add_inequality_constraint(g, 1e-3)
        tol = 1e-6
        maxeval = 50
        opt.set_ftol_rel(tol)
        opt.set_ftol_abs(tol)
        opt.set_xtol_rel(tol)
        opt.set_xtol_rel(tol)
        opt.set_maxeval(maxeval)
        opt.set_param("verbosity", 10000)
        opt.set_param("inner_maxeval", 10)
        xopt = opt.optimize(x0)
        print(xopt)
        opt_val = opt.last_optimum_value()
        print(opt_val)
        result = opt.last_optimize_result()
        print(result)

    The solution to this problem is simply [0.5, 0.5], which LD_MMA finds correctly from most initial guesses, but not from the starting guess [0.25, 1].

    In this case the log looks like this:

    MMA dual converged in 6 iterations to g=0.914369:
    MMA y[0]=1e+40, gc[0]=0.116025
    MMA outer iteration: rho -> 0.1
    MMA rhoc[0] -> 0.1
    MMA dual converged in 3 iterations to g=1.34431:
    MMA y[0]=1e+40, gc[0]=-0.269712
    MMA outer iteration: rho -> 0.01
    MMA rhoc[0] -> 0.01
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 0.6
    MMA dual converged in 3 iterations to g=2.23837:
    MMA y[0]=1e+40, gc[0]=-0.378669
    MMA outer iteration: rho -> 0.001
    MMA rhoc[0] -> 0.001
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 0.72
    MMA dual converged in 3 iterations to g=2.79213:
    MMA y[0]=1e+40, gc[0]=-0.454075
    MMA outer iteration: rho -> 0.0001
    MMA rhoc[0] -> 0.0001
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 0.864
    MMA dual converged in 3 iterations to g=3.13745:
    MMA y[0]=1e+40, gc[0]=-0.524222
    MMA outer iteration: rho -> 1e-05
    MMA rhoc[0] -> 1e-05
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 1.0368
    MMA dual converged in 3 iterations to g=2.46249:
    MMA y[0]=1e+40, gc[0]=-0.587037
    MMA outer iteration: rho -> 1e-05
    MMA rhoc[0] -> 1e-05
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 1.24416
    MMA dual converged in 3 iterations to g=1.81718:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.0001
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.81722:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.001
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.81766:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.01
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.82206:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.1
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.86611:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.410949
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=2.01828:
    MMA y[0]=1e+40, gc[0]=-0.625775
    [0.1160254 0.8660254]
    0.7993602791855875
    3

    Your solver stops at the design point [0.1160254 0.8660254], which is neither a local minimum, nor a saddle point of the objective, nor a KKT point. I would like to have your insight on this behavior. BRs, Simone Coniglio

    opened by SimoneConiglio 0
  • How to set discrete values for NLopt

    How to set discrete values for NLopt

    Hi, I have an optimization problem to solve. I have hundreds of input variables, and the values of these variables can only be 1 or 0.
    How can I tell this to the NLopt package? I tried it with an equality constraint, but it does not work. Here is my little code in R:

    opt_test<-function(boundary){
      
      target<-rep(c(0,1),100)
      sum_of_square<-0
      for (i in 1:length(boundary)){
        sum_of_square<-sum_of_square+sum((boundary[i]-target[i])^2)
      }
      #print(boundary)
      #print(sum_of_square)
      return(sum_of_square)
    }
    
    opt_test(rep(c(1,0),100))
    
    eval_g_eq_test<-function(x) {
      
      ret<-1
      for (i in 1:length(x)){
        if (x[i]==1) {ret<-0}
        if (x[i]==0) {ret<-0}
      }
      return(ret)
    }
    
    opts <- list("algorithm"="NLOPT_GN_ISRES"   # NLOPT_GN_ORIG_DIRECT NLOPT_GNL_DIRECT_NOSCAL, NLOPT_GN_DIRECT_L_NOSCAL, and NLOPT_GN_DIRECT_L_RAND_NOSCAL NLOPT_GD_STOGO, or NLOPT_GD_STOGO_RAND
                 # work well: NLOPT_LN_PRAXIS   NLOPT_LN_COBYLA  
                 #NLOPT_LN_NEWUOA !!!!! +bound
                 #NLOPT_LN_BOBYQA   !si only
                 # nloptr.print.options()   all possible options
                 ,xtol_rel=1e-8
                 #stopval=as.numeric(stopval),
                 ,maxeval=2000
                 ,print_level=1
    )
    x0<-rep(0,200)
    lb<-rep(0,200)
    ub<-rep(1,200)
    
    jo<- nloptr(x0=x0
                ,eval_f=opt_test
                ,lb = lb
                ,ub = ub
                #,eval_g_eq=eval_g_eq_test()==0
                ,opts=opts
    )
    
    opened by Axyxo 0
  • Error installing `nloptr` from source on CentOS cluster

    Error installing `nloptr` from source on CentOS cluster

    I am trying to install nloptr from source in R version 4.1.3 on a cluster (CentOS). However I receive the following error:

    /cvmfs/argon.hpc.uiowa.edu/2022.1/prefix/usr/lib/gcc/x86_64-pc-linux-gnu/9.4.0/../../../../x86_64-pc-linux-gnu/bin/ld: cannot find -lnlopt
    collect2: error: ld returned 1 exit status
    make: *** [/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/r-4.1.3-ljofaul/rlib/R/share/make/shlib.mk:10: nloptr.so] Error 1
    ERROR: compilation failed for package ‘nloptr’
    

    I am on a university cluster and cannot run sudo commands. After encouragement from @eddelbuettel to contact my sys admin, he figured out the issue. Here's what he wrote:


    The build environment for nloptr uses pkg-config to get information about nlopt. It turns out that the pkg-config file has an error. It has

    libdir=${exec_prefix}/lib

    but the library is actually located in

    libdir=${exec_prefix}/lib64

    That does not show up in the packaging environment because LIBRARY_PATH is set for the dependency chain. I will need to fix the pkg-config file in the package recipe, but you can work around it as follows:

    1. load environment modules:
    module load stack/2022.1
    module load nlopt
    
    2. set LIBRARY_PATH so linker can find library while launching R session (single line below):
    LIBRARY_PATH=$ROOT_NLOPT/lib64:$LIBRARY_PATH R
    
    3. install nloptr in the R console (single line below):
    install.packages(verbose=1,'nloptr')
    

    I originally posted this issue in the nloptr repo: https://github.com/astamm/nloptr/issues/123. However, @eddelbuettel encouraged me to post an issue here because we suspect that the issue may be the pkg-config file created by nlopt.

    Here's the output of some of my commands in CentOS:

    [itpetersen@argon-login-1 ~]$ module load stack/2022.1
    
    The following have been reloaded with a version change:
      1) stack/2020.1 => stack/2022.1
    
    [itpetersen@argon-login-1 ~]$ module load r/4.1.3_gcc-9.4.0
    [itpetersen@argon-login-1 ~]$ module load nlopt
    [itpetersen@argon-login-1 ~]$ R CMD config --all | grep lib64
    LIBnn = lib64
    [itpetersen@argon-login-1 ~]$ pkg-config --libs nlopt
    -L/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/nlopt-2.7.0-u5x4377/lib -lnlopt
    

    We think we would want that to be (https://github.com/astamm/nloptr/issues/123#issuecomment-1317199965_):

    -L/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/nlopt-2.7.0-u5x4377/lib64 -lnlopt
    

    That is, /lib64 in my case instead of /lib. @eddelbuettel, please clarify if I missed anything or got anything wrong!

    opened by isaactpetersen 4
  • Nonlinear constraints get violated in the result.

    Nonlinear constraints get violated in the result.

    Hi there: during my optimization I applied LN_COBYLA, since it supports arbitrary nonlinear constraints. My "constraint function" is actually a collision-avoidance function: it returns 10.0 when a collision occurs, so the constraint should be viewed as unsatisfied. However, the experimental results show that the constraint is not satisfied, yet the algorithm still finishes and converges. Can anyone give me some advice? Thanks sincerely! [Screenshot from 2022-10-29 21-14-44]

    opened by Lbaron980810 2
Releases (v2.7.1)
Owner
Steven G. Johnson