A toolkit for making real-world machine learning and data analysis applications in C++

Overview

dlib C++ library

Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems. See http://dlib.net for the main project documentation and API reference.

Compiling dlib C++ example programs

Go into the examples folder and type:

mkdir build; cd build; cmake ..; cmake --build .

That will build all the examples. If you have a CPU that supports AVX instructions then turn them on like this:

mkdir build; cd build; cmake .. -DUSE_AVX_INSTRUCTIONS=1; cmake --build .

Doing so will make some things run faster.

Finally, Visual Studio users should usually do everything in 64-bit mode. By default Visual Studio is 32-bit, both in its outputs and its own execution, so you have to explicitly tell it to use 64 bits. Since it's not the 1990s anymore, you probably want to use 64 bits. Do that with a cmake invocation like this:

cmake .. -G "Visual Studio 14 2015 Win64" -T host=x64 

Compiling your own C++ programs that use dlib

The examples folder has a CMake tutorial that tells you what to do. There are also additional instructions on the dlib web site.
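
To check that everything is wired up, a program as small as this should build and run against dlib (the snippet below is illustrative, not part of the tutorial):

#include <dlib/matrix.h>
#include <iostream>

int main()
{
    // build a 3x3 matrix of random values in [0,1) and print the sum of its entries
    dlib::matrix<double> m = dlib::randm(3, 3);
    std::cout << "sum of entries: " << dlib::sum(m) << std::endl;
    return 0;
}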

Alternatively, if you are using the vcpkg dependency manager you can download and install dlib with CMake integration in a single command:

vcpkg install dlib

Compiling dlib Python API

Before you can run the Python example programs you must compile dlib. Type:

python setup.py install

Running the unit test suite

Type the following to compile and run the dlib unit test suite:

cd dlib/test
mkdir build
cd build
cmake ..
cmake --build . --config Release
./dtest --runall

Note that on Windows your compiler might put the test executable in a subfolder called Release. If that's the case then you have to go to that folder before running the test.

This library is licensed under the Boost Software License, which can be found in dlib/LICENSE.txt. The long and short of the license is that you can use dlib however you like, even in closed source commercial software.

dlib sponsors

This research is based in part upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) under contract number 2014-14071600010. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the U.S. Government.

Comments
  • YOLO loss

    Hi, I've been spending the last few days trying to make a loss_yolo layer for dlib, in particular the loss presented in the YOLOv3 paper.

    I think I came up with a pretty straightforward implementation but, as of now, it still does not work.

    I wondered if you could have a look. I am quite confident the loss implementation is correct; however, I think I might be making some assumptions about the dlib API when the loss layer takes several inputs from the network.

    I tried to make the loss similar to the loss_mmod in the way you set the options of the layer, etc.

    So, my question is, does this way of coding the loss in dlib make sense for multiple outputs? Or is dlib doing something I don't expect?

    There's also a simple example program that takes a path containing a training.xml file (like the one from the face or vehicle detection examples).

    Thanks in advance :)

    opened by arrufat 105
  • QUESTION : is yolov3 possible in DLIB

    I am trying to define yolov3 using dlib's dnn module. I'm stuck with the darknet53 backbone, as I want it to expose the outputs of its last three stages. So far I have this:

    using namespace dlib;
                        
    template <int outc, int kern, int stride, typename SUBNET> 
    using conv_block = leaky_relu<affine<con<outc,kern,kern,stride,stride,SUBNET>>>;
    
    template <int inc, typename SUBNET>
    using resblock = add_prev1<conv_block<inc,3,1,conv_block<inc/2,1,1,tag1<SUBNET>>>>;
    
    template<int nblocks, int outc, typename SUBNET>
    using conv_resblock = repeat<nblocks, resblock<outc,
                          conv_block<outc, 3, 2, SUBNET>>>;
    
    template<typename SUBNET>
    using darknet53 = tag3<conv_resblock<4, 1024,
                      tag2<conv_resblock<8, 512,
                      tag1<conv_resblock<8, 256,
                      conv_resblock<2, 128,
                      conv_resblock<1, 64,
                      conv_block<32, 3, 1, SUBNET
                      >>>>>>>>>;
    

    Is it possible for darknet53 to output tag1, tag2 and tag3?
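
    For reference, a hedged sketch of how tagged outputs can be read back, assuming the aliases above compile (note that dlib's repeat takes a template with a single typename parameter, so resblock<inc, ·> may need to be wrapped in fixed-width aliases first). This is not code from the issue:

    #include <dlib/dnn.h>
    using namespace dlib;

    darknet53<input_rgb_image> net;
    matrix<rgb_pixel> img;
    // ... load img, then run a forward pass through the whole network:
    net(img);
    const tensor& p8  = layer<tag1>(net).get_output(); // stride-8 feature map
    const tensor& p16 = layer<tag2>(net).get_output(); // stride-16 feature map
    const tensor& p32 = layer<tag3>(net).get_output(); // stride-32 feature map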

    opened by pfeatherstone 83
  • Optimize dlib for POWER8 VSX

    Enable and optimize support for POWER8 VSX SIMD instructions on PPC64LE Linux in dlib/simd.

    $$$$ Financial bounties available. Any reasonable suggested value will be seriously considered.

    I welcome contact / replies from developers in the dlib community who are interested in working on this project.

    paid-bounty 
    opened by edelsohn 79
  • Example of DCGAN

    Hi, I would like to contribute a DCGAN example to dlib.

    I have implemented a C++ version of the PyTorch DCGAN.

    However, I would need some guidance with a few things I don't know how to do, and I am wondering how I should proceed. Should I attach my current code here (around 150 lines), or make a pull request, even if the code is not able to learn anything yet? Maybe @edubois can help out, since he stated that he managed to make it work in https://github.com/davisking/dlib/issues/1261

    Thanks for your hard work on dlib.

    enhancement 
    opened by arrufat 67
  • Arbitrary sized FFTs using modified kissFFT as default backend and MKL otherwise

    This PR adds a C++ port of kissFFT as the default backend of dlib's fft routines. All the kiss code is inserted into the dlib namespace to avoid conflicts with user code that may or may not be using the original (C) kissFFT. All MKL fft code is put into its own separate header file. Now dlib/matrix_fft.h simply calls the right wrappers depending on a pre-processor macro (as before). I've removed the FFTW wrapper code as per @davisking's request, as the fftw planner isn't thread safe. I've also removed the original fft backend code, again as per @davisking's request.

    Note that this PR isn't quite ready yet. It needs more unit tests. I'm not quite sure why the MKL wrappers only worked for matrix<complex<double>> before. There's no reason why they couldn't work for matrix<complex<float>> if using DFTI_SINGLE instead of DFTI_DOUBLE. (Maybe user code had to do a cast?) So the MKL wrappers need more unit tests.

    I'm not quite happy with the number of copies at the moment. When passing const matrix_exp<EXP>& data as input, there are 2 copies: one to evaluate the matrix expression to matrix<typename EXP::type>, and one to do either the in-place fft (the copy is done in the implementation details of the fft functions) or the out-of-place fft (the copy is done explicitly in dlib/matrix_fft.h). For example, if you want to do the fft of a std::vector<std::complex<float>>, you have to use dlib::mat, which returns a matrix expression, and dlib::fft will then evaluate that to dlib::matrix<std::complex<float>>. So there is an unnecessary copy from std::vector to dlib::matrix. We could add some function overloads to support std::vector, as sketched below, but I don't know what the best approach is without dirtying the API. This could benefit from @davisking's advice.
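
    A minimal sketch of the kind of overload in question (illustrative only, not part of this PR):

    #include <dlib/matrix.h>
    #include <complex>
    #include <vector>

    template <typename T>
    std::vector<std::complex<T>> fft(const std::vector<std::complex<T>>& v)
    {
        // wrap the vector as a column-vector matrix expression, evaluate the
        // fft, then copy the result back out exactly once
        const dlib::matrix<std::complex<T>, 0, 1> out = dlib::fft(dlib::mat(v));
        std::vector<std::complex<T>> ret(v.size());
        for (long i = 0; i < out.size(); ++i)
            ret[i] = out(i);
        return ret;
    }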

    I think if we've gone this far, we should support real FFTs, so the requirement that the input be complex vanishes. The output will still be complex, though. Both KISS and MKL support this, so there shouldn't be a lot of leg work. More unit tests.

    opened by pfeatherstone 56
  • Add dnn self supervised learning example

    Hi, recently I've been interested in self-supervised learning (SSL) methods in deep learning. I noticed that dlib has support for unsupervised loss layers by just leaving out the training_label_type typedef and the truth iterator argument to compute_loss_value_and_gradient().

    However, most methods I've read about and tried are quite complicated (hard-negative mining, predictors, stop gradient, exponential moving average of the weights between both architectures) due to the need to break the symmetry between both branches to avoid collapse. But recently I came across a simple method named Barlow Twins: Self-Supervised Learning via Redundancy Reduction. The idea is as follows:

    1. forward two augmented versions of an image
    2. compute the empirical cross-correlation matrix between both feature representations
    3. make that matrix as close to the identity as possible.

    That prevents collapse, since it forces each individual dimension of the feature vectors to focus on a different task, thus avoiding redundancy between dimensions (it needs a high-dimensional representation for it to work, since it relies on sparse representations).
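
    To make steps 1-3 concrete, here is a rough dlib::matrix sketch of the loss value (illustrative only, not the code in this PR; za and zb are hypothetical names for the batch-normalized feature matrices of the two views, one row per sample):

    #include <dlib/matrix.h>
    #include <cmath>
    using namespace dlib;

    double barlow_twins_loss(const matrix<double>& za, const matrix<double>& zb, double lambda)
    {
        // empirical cross-correlation between the two feature representations
        const matrix<double> c = trans(za) * zb / za.nr();
        double on_diag = 0, off_diag = 0;
        for (long r = 0; r < c.nr(); ++r)
        {
            for (long col = 0; col < c.nc(); ++col)
            {
                if (r == col)
                    on_diag += std::pow(c(r, col) - 1, 2);  // push the diagonal towards 1
                else
                    off_diag += std::pow(c(r, col), 2);     // push off-diagonal terms towards 0
            }
        }
        return on_diag + lambda * off_diag;
    }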

    So far, I've implemented:

    input_rgb_image_pair

    It takes a std::vector<std::pair<matrix<rgb_pixel>, matrix<rgb_pixel>>> and puts the first elements of the pairs in the first half of the batch and the second elements in the second half. This allows computing batch normalization on each half efficiently (done in the loss layer).

    loss_barlow_twins

    I tried to follow the official paper as closely as possible, and made use of the awesome Matrix Calculus site for the gradients (having element-wise operations, (off-)diagonal terms, and summations was a bit tedious to do manually 😅).

    But… I am experiencing some difficulties:

    • if I use a dnn_trainer I get a segfault (but if I train manually, like in the code, it works: at least the loss goes down)
    • if I use a batch size larger than 64, I also get a segfault (but I have plenty of RAM/VRAM)

    So maybe I misunderstood something about how to implement the input layer or the unsupervised loss layer… If you could have a look at some point… You can just run the example by giving a path to a folder containing the CIFAR-10 dataset.

    It would be great to have an example of SSL on dlib.

    Thanks in advance! (All the code is inside the example program; I will make a proper PR if we manage to get this working and you are interested, since the method is fairly new.)

    opened by arrufat 54
  • Add support for fused convolutions

    I've been playing a bit with the idea of having fused convolutions (convolution + batch_norm) in dlib. I think the first step would be to move all the operations that are done by the affine_ layer into the convolution, that is, update the bias of the convolution and re-scale the filters.
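
    (For reference, the arithmetic being folded: an affine_ layer computes y = γ·x + β per channel, so fusing it into the preceding convolution amounts to scaling each filter by its channel's γ and replacing the bias with γ·b + β. The visitor below follows this recipe.)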

    This PR adds some helper methods that allow doing this. The next step could be adding a new layer that can be constructed from an affine_ layer and is a no-op, like the tag layers, or adding a version of the affine layer that does nothing (just outputs its input, without copying or anything). How would you approach this?

    Finally, here's an example that uses a visitor to update the convolutions that are below an affine layer. It can be built by putting the file into the examples folder and loading the pretrained ResNet-50 from dnn_introduction3_ex.cpp. If we manage to make something interesting out of it, maybe it would be worth having this visitor in dlib, too.

    #include "resnet.h"
    
    #include <dlib/dnn.h>
    #include <dlib/image_io.h>
    
    using namespace std;
    using namespace dlib;
    
    class visitor_fuse_convolutions
    {
        public:
        template <typename T> void fuse_convolutions(T&) const
        {
            // ignore other layer types
        }
    
        // handle the standard case (convolutional layer followed by affine)
        template <long nf, long nr, long nc, int sy, int sx, int py, int px, typename U, typename E>
        void fuse_convolutions(add_layer<affine_, add_layer<con_<nf, nr, nc, sy, sx, py, px>, U>, E>& l)
        {
            // get the parameters from the affine layer as alias_tensor_instance
            auto gamma = l.layer_details().get_gamma();
            auto beta = l.layer_details().get_beta();
    
            // get the convolution below the affine layer and its parameters
            auto& conv = l.subnet().layer_details();
            const long num_filters_out = conv.num_filters();
            const long num_rows = conv.nr();
            const long num_cols = conv.nc();
            tensor& params = conv.get_layer_params();
            // guess the number of input filters
            long num_filters_in;
            if (conv.bias_is_disabled())
                num_filters_in = params.size() / num_filters_out / num_rows / num_cols;
            else
                num_filters_in = (params.size() - num_filters_out) / num_filters_out / num_rows / num_cols;
    
            // set the new number of parameters for this convolution
            const size_t num_params = num_filters_in * num_filters_out * num_rows * num_cols + num_filters_out;
            alias_tensor filters(num_filters_out, num_filters_in, num_rows, num_cols);
            alias_tensor biases(1, num_filters_out);
            if (conv.bias_is_disabled())
            {
                conv.enable_bias();
                resizable_tensor new_params = params;
                new_params.set_size(num_params);
                biases(new_params, filters.size()) = 0;
                params = new_params;
            }
    
            // update the biases: new_bias = gamma * old_bias + beta
            auto b = biases(params, filters.size());
            b = pointwise_multiply(mat(b), mat(gamma));
            b += mat(beta);
    
            // rescale the filters
            DLIB_CASSERT(filters.num_samples() == gamma.k());
            auto t = filters(params, 0);
            float* f = t.host();
            const float* g = gamma.host();
            for (long n = 0; n < filters.num_samples(); ++n)
            {
                for (long k = 0; k < filters.k(); ++k)
                {
                    for (long r = 0; r < filters.nr(); ++r)
                    {
                        for (long c = 0; c < filters.nc(); ++c)
                        {
                            f[tensor_index(t, n, k, r, c)] *= g[n];
                        }
                    }
                }
            }
    
            // reset the affine layer
            gamma = 1;
            beta = 0;
        }
    
        template <typename input_layer_type>
        void operator()(size_t , input_layer_type& l) const
        {
            // ignore other layers
        }
    
        template <typename T, typename U, typename E>
        void operator()(size_t , add_layer<T, U, E>& l)
        {
            fuse_convolutions(l);
        }
    };
    
    int main(const int argc, const char** argv)
    try
    {
        resnet::infer_50 net1, net2;
        std::vector<std::string> labels;
        deserialize("resnet50_1000_imagenet_classifier.dnn") >> net1 >> labels;
        net2 = net1;
        matrix<rgb_pixel> image;
        load_image(image, "elephant.jpg");
    
        const auto& label1 = labels[net1(image)];
        const auto& out1 = net1.subnet().get_output();
        resizable_tensor probs(out1);
        tt::softmax(probs, out1);
        cout << "pred1: " << label1 << " (" << max(mat(probs)) << ")" << endl;
    
    
        // fuse the convolutions in the network
        dlib::visit_layers_backwards(net2, visitor_fuse_convolutions());
        const auto& label2 = labels[net2(image)];
        const auto& out2 = net2.subnet().get_output();
        tt::softmax(probs, out2);
        cout << "pred2: " << label2 << " (" << max(mat(probs)) << ")" << endl;
    
        cout << "max abs difference: " << max(abs(mat(out1) - mat(out2))) << endl;
        DLIB_CASSERT(max(abs(mat(out1) - mat(out2))) < 1e-2);
    }
    catch (const exception& e)
    {
        cout << e.what() << endl;
        return EXIT_FAILURE;
    }
    

    output with this image (elephant.jpg):

    pred1: African_elephant (0.962677)
    pred2: African_elephant (0.962623)
    max abs difference: 0.00436211
    

    UPDATE: make the visitor more generic and show results with a real image

    enhancement 
    opened by arrufat 53
  • Semantic Segmentation Functionality

    I'm interested in using dlib for semantic segmentation. I think the only necessary features would be:

    • Loss function - it doesn't look like loss_multiclass_log would work for this
    • "Unpooling" upsampling layer - there are a few ways to do this

    Is this something you're interested in supporting? I'm planning to add this functionality regardless, just wanted to check in.

    Regarding implementation, would you want the upsampling added to the pooling class, or implemented as its own upsample class?

    enhancement help wanted 
    opened by davidmascharka 53
  • Adding an install target to dlib's CMakeLists

    This PR covers the first part of items discussed in #34, namely, the installation.

    A follow-up PR (hopefully tomorrow) should cover the second part, namely the generation and installation of dlibConfig.cmake.

    opened by severin-lemaignan 51
  • Trying to compile dlib 19.20 with cuda 11 and cudnn 8, or cuda 10.1 and cudnn 7.6.4

    Environment: Pop!_OS (Ubuntu 20.04), GCC 9 and 8 (tried both), CMake 3.16.3, dlib 19.20.99, Python 3.8.2, NVIDIA Quadro P600

    Expected Behavior

    Compiling with CUDA.

    Current Behavior

    .....
    -- Found cuDNN: /usr/lib/x86_64-linux-gnu/libcudnn.so
    -- Building a CUDA test project to see if your compiler is compatible with CUDA...
    -- Checking if you have the right version of cuDNN installed.
    -- *** Found cuDNN, but it looks like the wrong version so dlib will not use it. ***
    -- *** Dlib requires cuDNN V5.0 OR GREATER. Since cuDNN is not found DLIB WILL NOT USE CUDA.
    .....

    • Where did you get dlib: git clone https://github.com/davisking/dlib.git

    I first tried downloading the latest (and suggested) version of CUDA from the NVIDIA site, which was CUDA 11, then downloaded cuDNN for CUDA 11, which was 8.0.0 (runtime and dev .deb packages), and installed them following NVIDIA's method.

    when compiling "cudnn_samples_v8" everything works, so I think installations went ok. but no way to get dlib compiled with Cuda.

    I've tried to uninstall CUDA 11 and cuDNN 8 and install CUDA 10.1 and cuDNN 7.6 (suggested for CUDA 10.1), but the result is the same. Every time, I erased the build directory and compiled in 2 ways: as per the dlib instructions, using $ sudo python3 setup.py install, or with $ cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1

    Any suggestions? Thanks.

    inactive 
    opened by spiderdab 48
  • Arbitrary FFT sizes

    Feature suggestion: allow the fft routine to do arbitrary sized FFTs. Let FFTW, LAPACK, BLAS or whatever do the magic of choosing the correct kernels for the input.

    enhancement 
    opened by pfeatherstone 47
  • Why doesn't the library have face_recognition.face_decoding()

    I need a little guidance about generating an image from an encoded face model. face_recognition.face_encoding() has helped a lot; I want to know if I can convert the encoding back into the person's face image, and what steps are required to reconstruct an image from the saved face encoding.

    Please advise.

    opened by RoshanCubastion 0
  • dlib import error on termux

    I tried to run a program but it returned an error:

    Traceback (most recent call last):
      File "/data/data/com.termux/files/home/./faceswap.py", line 47, in <module>
        import dlib
      File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/dlib-19.24.99-py3.11-linux-aarch64.egg/dlib/__init__.py", line 19, in <module>
        from _dlib_pybind11 import *
    ImportError: dlopen failed: cannot locate symbol "PyExc_ImportError" referenced by "/data/data/com.termux/files/usr/lib/python3.11/site-packages/dlib-19.24.99-py3.11-linux-aarch64.egg/_dlib_pybind11.cpython-311.so"...

    Then I tried commenting out the "from _dlib_pybind11 import *" line, but another error appeared:

    Traceback (most recent call last):
      File "/data/data/com.termux/files/home/./faceswap.py", line 47, in <module>
        import dlib
      File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/dlib-19.24.99-py3.11-linux-aarch64.egg/dlib/__init__.py", line 20, in <module>
        from _dlib_pybind11 import __version__, __time_compiled__
    ImportError: dlopen failed: cannot locate symbol "PyExc_ImportError" referenced by "/data/data/com.termux/files/usr/lib/python3.11/site-packages/dlib-19.24.99-py3.11-linux-aarch64.egg/_dlib_pybind11.cpython-311.so"...

    How to fix it?

    dlib version: 19.24.99, Python version: 3.11.1, Termux version: 0.118.21, Android version: 12

    opened by okbyxray 0
  • Building dlib gets stuck forever with no error logs

    Dlib build gets stuck forever

    Expected Behavior

    It should at least print out more details about why it gets stuck at those lines of code, or it should not stay stuck for that long.

    Current Behavior

    It gets stuck at these lines:

     => [stage-1  9/16] RUN pip install --no-cache-dir dlib -vvv                                                                                                                                                                                           506.9s
     => => #                    from /tmp/pip-install-2d_opbew/dlib_93a3736d4a0d4fcaa59f51c92234fb97/dlib/../dlib/python.h:6,
     => => #                    from /tmp/pip-install-2d_opbew/dlib_93a3736d4a0d4fcaa59f51c92234fb97/tools/python/src/opaque_types.h:6,
     => => #                    from /tmp/pip-install-2d_opbew/dlib_93a3736d4a0d4fcaa59f51c92234fb97/tools/python/src/face_recognition.cpp:4:
     => => #   /usr/local/include/python3.9/ceval.h:130:37: note: declared here
     => => #     130 | Py_DEPRECATED(3.9) PyAPI_FUNC(void) PyEval_InitThreads(void);
     => => #         |
    

    Steps to Reproduce

    • Version: Latest (since I run pip install dlib)
    • Where did you get dlib: From official source
    • Platform: I use img to build a Docker image; inside the Dockerfile there is a step to install dlib using pip
    FROM python:3.9.14
    # Install base packages for Python
    RUN pip install --no-cache-dir huggingface_hub flask numpy pandas scipy matplotlib Pillow cython \
        torch torchvision cmake virtualenv ndg-httpsclient ldm h5py datasets
    RUN pip install --no-cache-dir dlib -vvv
    

    img tool: https://github.com/genuinetools/img

    opened by MrNocTV 3
  • Example variadic Python functions in find_max_global

    Hi

    Please provide an example of using find_max_global with a non-fixed number of arguments. The current example doesn't show this functionality.
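
    For reference, here is a minimal C++ sketch of one way the arity can be left flexible (illustrative only: the objective takes a matrix<double,0,1>, so the number of arguments is set by the length of the bound vectors rather than being fixed):

    #include <dlib/global_optimization.h>
    #include <dlib/matrix.h>
    #include <iostream>

    int main()
    {
        using namespace dlib;
        // maximize -||x||^2 over a box; add or remove entries in the bound
        // vectors to change the number of arguments
        auto f = [](const matrix<double,0,1>& x) { return -sum(squared(x)); };
        matrix<double,0,1> lower = {-10, -10, -10};
        matrix<double,0,1> upper = { 10,  10,  10};
        auto result = find_max_global(f, lower, upper, max_function_calls(100));
        std::cout << "best value: " << result.y << std::endl;  // best arguments are in result.x
        return 0;
    }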

    Thanks, Ankit

    opened by AnkitAggarwalAlphagrep 1
  • test_shape_predictor function returns a confusing value

    As http://dlib.net/ml.html#test_shape_predictor says, it "tests a shape_predictor's ability to correctly predict the part locations of objects. The output is the average distance (measured in pixels) between each part and its true location."

    I simply ran print("\nTesting accuracy: {0}".format(dlib.test_shape_predictor(testing_xml_path, "modelbk/predictor.dat"))) for my own model and print("\nTesting accuracy: {0}".format(dlib.test_shape_predictor(testing_xml_path, "shape_predictor_68_face_landmarks.dat"))) for a pretrained model, and got testing accuracies of 81.7678 for my own model and 322.52788 for the pretrained model. How should I interpret these numbers? My own model cannot give the correct locations.

    What's more, my own model was trained with "options.tree_depth = 2", which I believe is why its size is far smaller than the pretrained "shape_predictor_68_face_landmarks.dat". Can I compare these two models in the way shown above?

    opened by young169 0
  • test_object_detection_function() returning wrong results on test data

    When I use test_object_detection_function() on the training data after training is done, it gives 1 0.967433 0.967433. But when I use it on test images (I'm using 5 images from the faces folder as a test set), it gives 1 0.16 0.16. I know recall shouldn't be 0.16, because when I use the testing loop as in dnn_mmod_ex.cpp, out of the 25 faces in these images it fails to find only 2, which should mean around 0.92 recall. Additionally, every other picture I test that wasn't used in training shows good results, finding almost every face. I would like to know if I'm doing something wrong.

    Thanks!

    Edit: I just printed all the detection results in the testing loop, and I noticed that the detection coordinates don't make sense. Then I realized that pyramid_up() may actually be messing with the coordinates, since it scales up the image. That may be why test_object_detection_function() cannot find the testing boxes to compare against.

    opened by CaioFPeres 0