Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration.

Overview

Real-ESRGAN


  1. Colab Demo for Real-ESRGAN.
  2. Portable Windows executable file. You can find more information in the Portable executable files section below.

Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration.
We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.

🚩 The training codes have been released. A detailed guide can be found in Training.md.

📖 Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data

[Paper]   [Project Page]   [Demo]
Xintao Wang, Liangbin Xie, Chao Dong, Ying Shan
Applied Research Center (ARC), Tencent PCG
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences


We have provided a pretrained model (RealESRGAN_x4plus.pth) with 4x upsampling.
Note that RealESRGAN may still fail in some cases, as real-world degradations are really too complex.
Moreover, it may not perform well on human faces, text, etc., which will be optimized later.

Real-ESRGAN will be a long-term supported project (in my current plan 😃 ). It will be continuously updated in my spare time.

Here is a TODO list in the near future:

  • optimize for human faces
  • optimize for texts
  • optimize for animation images
  • support more scales
  • support controllable restoration strength

If you have any good ideas or requests, please open an issue/discussion to let me know.
If you have some images that Real-ESRGAN cannot restore well, please also open an issue/discussion. I will record them (but I cannot guarantee to resolve them 😛 ). If necessary, I will open a page to specially record these real-world cases that need to be solved but that current technology has difficulty handling well.


Portable executable files

You can download Windows executable files from https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRGAN-ncnn-vulkan-20210725-windows.zip

This executable file is portable and includes all the binaries and models required. No CUDA or PyTorch environment is needed.

You can simply run the following command:

./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png

We have provided three models:

  1. realesrgan-x4plus (default)
  2. realesrnet-x4plus
  3. esrgan-x4

You can use the -n argument for other models, for example, ./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus

Note that it may introduce block inconsistency (and also generate results that differ slightly from the PyTorch implementation), because this executable first crops the input image into several tiles, processes them separately, and finally stitches them back together.
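
To illustrate what the tiling does, here is a minimal, hypothetical sketch of the crop-process-stitch pattern in Python. It deliberately omits the tile overlap/padding that the real executable uses to hide seams, which is exactly where block inconsistency can come from:

    import numpy as np

    def upscale_tiled(img, upscale_fn, tile=256, scale=4):
        """Split img into tiles, upscale each tile with upscale_fn, and stitch the results.

        Without overlapping/padding the tiles, visible seams may appear at tile
        borders; this is the 'block inconsistency' mentioned above.
        """
        h, w = img.shape[:2]
        out = np.zeros((h * scale, w * scale, img.shape[2]), dtype=img.dtype)
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                patch = img[y:y + tile, x:x + tile]
                out[y * scale:(y + patch.shape[0]) * scale,
                    x * scale:(x + patch.shape[1]) * scale] = upscale_fn(patch)
        return out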

This executable file is based on the wonderful Tencent/ncnn and realsr-ncnn-vulkan by nihui.


🔧 Dependencies and Installation

Installation

  1. Clone repo

    git clone https://github.com/xinntao/Real-ESRGAN.git
    cd Real-ESRGAN
  2. Install dependent packages

    # Install basicsr - https://github.com/xinntao/BasicSR
    # We use BasicSR for both training and inference
    pip install basicsr
    pip install -r requirements.txt

Quick Inference

Download the pre-trained model RealESRGAN_x4plus.pth:

wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models

Inference!

python inference_realesrgan.py --model_path experiments/pretrained_models/RealESRGAN_x4plus.pth --input inputs

Results are saved in the results folder.
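
If you prefer to call Real-ESRGAN from Python rather than through the script, a minimal sketch looks roughly like the following. It assumes the RealESRGANer helper class from this repo and the RRDBNet architecture from BasicSR; argument names may differ between versions, so check inference_realesrgan.py for the exact usage:

    import cv2
    from basicsr.archs.rrdbnet_arch import RRDBNet
    from realesrgan import RealESRGANer

    # RealESRGAN_x4plus uses the standard RRDBNet backbone (23 blocks, 4x scale)
    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
    upsampler = RealESRGANer(
        scale=4,
        model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth',
        model=model,
        tile=0,      # set > 0 to process in tiles if you run out of GPU memory
        half=False)  # half=True enables fp16 inference on supported GPUs

    img = cv2.imread('inputs/your_image.png', cv2.IMREAD_COLOR)  # replace with your input image
    output, _ = upsampler.enhance(img, outscale=4)
    cv2.imwrite('results/your_image_out.png', output)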

🏰 Model Zoo

💻 Training

A detailed guide can be found in Training.md.
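
As background for readers who wonder what "trained with pure synthetic data" means in practice: low-quality training inputs are synthesized on the fly from clean images by chaining degradations (blur, resize, noise, JPEG compression), applied more than once. The snippet below is only a rough, hypothetical illustration of that idea with OpenCV; the actual degradation model, its parameter ranges, and the sinc filtering are described in the paper and implemented in the training code (see Training.md).

    import cv2
    import numpy as np

    def degrade_once(img):
        """One simplified degradation stage: blur -> random resize -> noise -> JPEG."""
        h, w = img.shape[:2]
        img = cv2.GaussianBlur(img, (7, 7), sigmaX=np.random.uniform(0.2, 3.0))
        s = np.random.uniform(0.5, 1.5)                       # random down/up resize
        img = cv2.resize(img, (int(w * s), int(h * s)), interpolation=cv2.INTER_LINEAR)
        img = cv2.resize(img, (w, h), interpolation=cv2.INTER_LINEAR)
        img = np.clip(img + np.random.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
        quality = int(np.random.uniform(30, 95))              # random JPEG quality
        _, enc = cv2.imencode('.jpg', img, [cv2.IMWRITE_JPEG_QUALITY, quality])
        return cv2.imdecode(enc, cv2.IMREAD_COLOR)

    gt = cv2.imread('a_clean_image.png')          # high-quality ground truth
    lq = degrade_once(degrade_once(gt))           # apply the degradation twice (second order)
    lq = cv2.resize(lq, (gt.shape[1] // 4, gt.shape[0] // 4))  # 4x smaller LR input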

BibTeX

@Article{wang2021realesrgan,
    title={Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
    author={Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
    journal={arXiv:2107.10833},
    year={2021}
}

📧 Contact

If you have any questions, please email [email protected] or [email protected].

Comments
  • Unable to use my own trained models.

    Hello,

    I updated to the latest repo version and now I'm not able to use the models I have just trained.

    Before, I used the --input_path argument to point to the trained model path. Now --input_path no longer exists, so I moved the trained models to rsgan/weights as the new inference_realesrgan.py indicates, but then I get an error like the following:

    Traceback (most recent call last):
      File "inference_realesrgan.py", line 128, in <module>
        main()
      File "inference_realesrgan.py", line 75, in main
        scale=netscale,
    UnboundLocalError: local variable 'netscale' referenced before assignment

    Any idea how I should use the weights obtained by training or fine-tuning with my own data?
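
    A hedged sketch of one possible workaround, for anyone hitting the same error, is to construct the network and pass the scale explicitly instead of relying on the script's model-name detection (class and argument names are assumed from this repo and BasicSR and may differ between versions):

    from basicsr.archs.rrdbnet_arch import RRDBNet
    from realesrgan import RealESRGANer

    netscale = 4  # set this to the scale your model was trained for
    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                    num_block=23, num_grow_ch=32, scale=netscale)
    upsampler = RealESRGANer(scale=netscale,
                             model_path='experiments/your_finetuned_model.pth',  # hypothetical path
                             model=model)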

    opened by SirSykon 11
  • refactor inference_realesrgan_video.py

    Avoid outputting temporary files by using streaming processing. Removed features:

    • image/folder input
    • image/folder output
    • timer (tqdm can do it)

    Input argument changes:

    • add 'ffmpeg_bin'
    • remove 'video'
    • remove 'audio'
    • remove 'ext'

    issue #212

    opened by Juszoe 9
  • Output images are completely black

    1. Runtime environment
    OS: Windows 11 Pro 21H2
    CPU: Intel(R) Core(TM) i5-9400F @ 2.90GHz
    GPU: NVIDIA GeForce RTX 1660Ti

    (gfpgan) > nvidia-smi
    Sat May 14 16:53:18 2022
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 512.59       Driver Version: 512.59       CUDA Version: 11.6     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  NVIDIA GeForce ... WDDM  | 00000000:01:00.0 Off |                  N/A |
    | 34%   30C    P8    18W / 130W |    509MiB /  6144MiB |      2%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |    0   N/A  N/A       576    C+G   ...n1h2txyewy\SearchHost.exe    N/A      |
    |    0   N/A  N/A      1552    C+G   C:\Windows\System32\dwm.exe     N/A      |
    |    0   N/A  N/A      3100    C+G   C:\Windows\explorer.exe         N/A      |
    |    0   N/A  N/A      3376    C+G   ...artMenuExperienceHost.exe    N/A      |
    |    0   N/A  N/A      3896    C+G   ...8bbwe\WindowsTerminal.exe    N/A      |
    |    0   N/A  N/A      4120    C+G   C:\Windows\System32\dwm.exe     N/A      |
    |    0   N/A  N/A      5608    C+G   ...y\ShellExperienceHost.exe    N/A      |
    |    0   N/A  N/A      6516    C+G   ...dows\System32\LogonUI.exe    N/A      |
    |    0   N/A  N/A      6636    C+G   ...ge\Application\msedge.exe    N/A      |
    |    0   N/A  N/A      7324    C+G   ...2txyewy\TextInputHost.exe    N/A      |
    |    0   N/A  N/A      8444    C+G   ...210.39\msedgewebview2.exe    N/A      |
    |    0   N/A  N/A     10088    C+G   ...ows\System32\WUDFHost.exe    N/A      |
    |    0   N/A  N/A     10160    C+G   ...lPanel\SystemSettings.exe    N/A      |
    |    0   N/A  N/A     10424    C+G   ...ekyb3d8bbwe\YourPhone.exe    N/A      |
    |    0   N/A  N/A     12744    C+G   ...210.39\msedgewebview2.exe    N/A      |
    +-----------------------------------------------------------------------------+
    
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2022 NVIDIA Corporation
    Built on Tue_Mar__8_18:36:24_Pacific_Standard_Time_2022
    Cuda compilation tools, release 11.6, V11.6.124
    Build cuda_11.6.r11.6/compiler.31057947_0
    

    2. Related software
    Anaconda 4.10.3, Python 3.8.13, PyTorch 1.11.0

    3. What I did
    I started with the GFPGAN project. After installing it following the tutorial, I ran the example command (python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2). Face cropping and enhancement worked fine, but the output restored_imgs contained only the faces; the background was completely black.

    After disabling -bg_upsampler (by specifying a non-existent upsampler), the images in restored_imgs were output normally: the faces were enhanced but the background remained blurry (as expected, since no background upsampler was used). So I concluded that realesrgan was not running correctly.

    I downloaded the portable version of Real-ESRGAN; it works fine and does not produce black images.

    I then cloned this project's code, installed it following the tutorial, and tested with the example command (python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance); the output images are completely black.

    To rule out an operating-system problem, I installed Ubuntu 20.04.4 LTS under WSL2 and ran the same test; the problem is the same.

    I searched the issues and did not find anything related; I hope I can get some help.

    Thank you very much; these projects are great!

    opened by rayeesoft 8
  • Ringing and overshoot artifacts from the sinc module in the blind degradation pipeline

    I wanted to see what the images produced by the sinc module finally look like, so I extracted the sinc module as shown below (I removed the author's comments). However, the image shown by cv2.imshow is just a white board. Could you help me check where my processing went wrong?

    import random

    import cv2
    import numpy as np
    import torch
    import torch.nn.functional as F
    from scipy import special


    def circular_lowpass_kernel(cutoff, kernel_size, pad_to=0):
        assert kernel_size % 2 == 1, 'Kernel size must be an odd number.'
        kernel = np.fromfunction(
            lambda x, y: cutoff * special.j1(cutoff * np.sqrt(
                (x - (kernel_size - 1) / 2) ** 2 + (y - (kernel_size - 1) / 2) ** 2)) / (2 * np.pi * np.sqrt(
                (x - (kernel_size - 1) / 2) ** 2 + (y - (kernel_size - 1) / 2) ** 2)), [kernel_size, kernel_size])
        kernel[(kernel_size - 1) // 2, (kernel_size - 1) // 2] = cutoff ** 2 / (4 * np.pi)
        kernel = kernel / np.sum(kernel)
        if pad_to > kernel_size:
            pad_size = (pad_to - kernel_size) // 2
            kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size)))
        return kernel
    
    
    def filter2D(img, kernel):
        img = torch.FloatTensor(img)
        kernel = torch.FloatTensor(kernel)
        k = kernel.shape[-1]
        assert k % 2 == 1, 'Kernel size must be an odd number.'
        h, w, c = img.shape
        img = img.view(1, c, h, w)
        if k % 2 == 1:
            img = F.pad(img, (k // 2, k // 2, k // 2, k // 2), mode='reflect')  # padding
        else:
            raise ValueError('Wrong kernel size')
        ph, pw = img.size()[-2:]
        device = torch.device('cuda')
        if kernel.size(0) == 1:
            # apply the same kernel to all batch images
            img = img.view(1 * c, 1, ph, pw).to(device)
            kernel = kernel.view(1, 1, k, k).to(device)
            return F.conv2d(img, kernel, padding=0).view(h, w, c).cpu().numpy().clip(0, 255)
        else:
            img = img.view(1, c, ph, pw).to(device)
            kernel = kernel.view(1, 1, k, k).repeat(1, c, 1, 1).view(c, 1, k, k).to(device)
            return F.conv2d(img, kernel, groups=c).view(h, w, c).cpu().numpy().clip(0, 255)
    img = cv2.imread("./1111.jpg")
    sinc_kernel_size = random.choice([7, 9, 11, 13, 15, 17, 19, 21])
    omega_c = np.random.uniform(np.pi / 3, np.pi)
    sinc_kernel = circular_lowpass_kernel(omega_c, sinc_kernel_size, pad_to=21)
    cv2.imshow("sinc_kernel", sinc_kernel)
    cv2.imshow("img0", img)
    res_img = filter2D(img, sinc_kernel)
    cv2.imshow("res_img", res_img)
    cv2.waitKey(0)
    
    opened by tongchangD 8
  • Error related to File/Video Import Widget

    MessageError                              Traceback (most recent call last)
    in ()
         14
         15 # upload images
    ---> 16 uploaded = files.upload()
         17 for filename in uploaded.keys():
         18   dst_path = os.path.join(upload_folder, filename)

    2 frames
    /usr/local/lib/python3.7/dist-packages/google/colab/_message.py in read_reply_from_input(message_id, timeout_sec)
        104         reply.get('colab_msg_id') == message_id):
        105       if 'error' in reply:
    --> 106         raise MessageError(reply['error'])
        107     return reply.get('data', None)
        108

    MessageError: TypeError: Cannot read property '_uploadFiles' of undefined

    opened by Ghee36 8
  • cannot import name 'AvgTimer' from 'basicsr.utils.logger'

    python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n RealESRGANv2-animevideo-xsx2 -s 2 -v -a --half --suffix outx2
    Traceback (most recent call last):
      File "D:\python学习\Real-ESRGAN-master\Real-ESRGAN-master\inference_realesrgan_video.py", line 9, in <module>
        from basicsr.utils.logger import AvgTimer
    ImportError: cannot import name 'AvgTimer' from 'basicsr.utils.logger' (D:\Python\lib\site-packages\basicsr\utils\logger.py)

    opened by wsysl1989 7
  • The usage of the _dequeue_and_enqueue function in RealESRNetModel

    Hi, I have read the code several times, but I cannot figure out what the role of the _dequeue_and_enqueue function in RealESRNetModel is. It is only used in feed_data(), which just puts self.lq and self.gt into self.queue_lq and self.queue_gt, but I cannot find any other code that uses self.queue_lq and self.queue_gt. I would appreciate it if someone could explain this.
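
    For context, a pool of this kind generally works like the simplified, hypothetical sketch below (not the repo's exact code): feed_data() pushes each newly synthesized lq/gt batch into fixed-size queues and pops a shuffled batch back out, so that one training batch can mix samples whose degradations were synthesized in different iterations.

    import torch

    class PairPool:
        """Simplified training-pair pool (hypothetical sketch)."""

        def __init__(self, pool_size, shape):
            self.pool_size = pool_size
            self.queue_lq = torch.zeros(pool_size, *shape)
            self.queue_gt = torch.zeros(pool_size, *shape)
            self.ptr = 0

        def dequeue_and_enqueue(self, lq, gt):
            b = lq.size(0)
            if self.ptr == self.pool_size:
                # pool is full: shuffle it, return the first b samples, and
                # replace them with the incoming batch
                idx = torch.randperm(self.pool_size)
                self.queue_lq, self.queue_gt = self.queue_lq[idx], self.queue_gt[idx]
                out_lq, out_gt = self.queue_lq[:b].clone(), self.queue_gt[:b].clone()
                self.queue_lq[:b], self.queue_gt[:b] = lq, gt
                return out_lq, out_gt
            # pool not yet full: enqueue and return the batch unchanged
            self.queue_lq[self.ptr:self.ptr + b] = lq
            self.queue_gt[self.ptr:self.ptr + b] = gt
            self.ptr += b
            return lq, gt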

    opened by YangGangZhiQi 7
  • 'VCOMP140D.DLL' is required

    A straightforward execution of the 'Windows executable files' fails because 'VCOMP140D.DLL' is required.

    For me it was necessary to install 'Visual Studio 2019' and load the 'MSVC v142' package to solve this problem. Putting the 'VCOMP140D.DLL' into the 'Windows executable files' would help other users.

    The results look good, thanks for releasing.

    opened by ghost 7
  • ESRGAN massive CPU bottleneck

    Like I mentioned in this discussion: https://github.com/xinntao/Real-ESRGAN/discussions/269, ESRGAN is severely bottlenecked by my CPU, even though the CPU is a beast (Ryzen 9 5900X) and the GPU is comparatively weak (GTX 1080). My GPU load is currently around 19% on average while the CPU is at 90-95%; the GPU is literally idling (screenshots attached). This is using the portable NCNN version with the realesr-animevideov3 model at 4x, going from 628x480 to 2512x1920.

    opened by TechnoMasterBoy 6
  • Portable Executable error: Invalid bitcast %379 = bitcast i32 %378 to i16

    I installed the portable executable on my Linux host, tried to run it, and received this error:

    quantum0@test-host:~/realesrgan$ ./realesrgan-ncnn-vulkan -i input.jpg -o output.png -n realesrgan-x4plus
    [0 llvmpipe (LLVM 12.0.0, 256 bits)]  queueC=0[1]  queueG=0[1]  queueT=0[1]
    [0 llvmpipe (LLVM 12.0.0, 256 bits)]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=0
    [0 llvmpipe (LLVM 12.0.0, 256 bits)]  fp16-p/s/a=1/1/0  int8-p/s/a=1/1/0
    [0 llvmpipe (LLVM 12.0.0, 256 bits)]  subgroup=8  basic=1  vote=1  ballot=1  shuffle=0
    WARNING: lavapipe is not a conformant vulkan implementation, testing use only.
    Invalid bitcast
      %379 = bitcast i32 %378 to i16
    LLVM ERROR: Broken function
    Aborted
    

    What am I doing wrong?

    Maybe this will be useful:

    quantum0@test-host:~/realesrgan$ uname -a
    Linux test-host 5.4.0-96-generic #109-Ubuntu SMP Wed Jan 12 16:49:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    

    Btw, it's not enough to just download the executable and run it; I had to install these dependencies:

    sudo apt-get install libvulkan-dev
    sudo apt-get install libgomp1
    
    opened by Quantum-0 6
  • Image tiles are laid out incorrectly, please fix

    I am using the Windows version and tested with the input.jpg bundled in the program directory, using the following commands:

    D:\Real-ESRGAN-realesrgan-ncnn-vulkan-20211212-windows\realesrgan-ncnn-vulkan.exe -i input.jpg -o test.png -s 2
    D:\Real-ESRGAN-realesrgan-ncnn-vulkan-20211212-windows\realesrgan-ncnn-vulkan.exe -i input.jpg -o test.png -s 1

    I tested both 1x and 2x; the tile layout of the output is wrong in the same way for both. Please fix.

    opened by cuihua0 6
  • How was realesr-general-wdn-x4v3 trained?

    Hello, I see that the realesr-general-x4v3 model has been added on GitHub, with the denoise strength adjustable via the -dn argument. It requires two models, realesr-general-wdn-x4v3.pth and realesr-general-x4v3.pth, where realesr-general-wdn-x4v3.pth appears to be the model without noise suppression. How was this model trained? Thanks.

    opened by buwangchuxin1992 0
  • Output file when running -s argument shows corruption of data

    I am running Real-ESRGAN ncnn Vulkan, and the -s argument only works for the default realesr-animevideov3 model, not for any of the other models provided.

    All other models are stuck at 4x scale.

    Steps to recreate:

    realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrgan-x4plus -s 2

    or

    realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrgan-x4plus-anime -s 2

    Output (for both): (screenshot attached)

    opened by AlphaHasher 0
  • invalid bitcast error

    I get a very similar output to #262, even after installing the packages recommended in the comments on that issue:

    $ ./realesrgan-ncnn-vulkan -i input.jpg -o output.png -n realesr-animevideov3 -s 2
    [0 llvmpipe (LLVM 12.0.0, 256 bits)]  queueC=0[1]  queueG=0[1]  queueT=0[1]
    [0 llvmpipe (LLVM 12.0.0, 256 bits)]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=0
    [0 llvmpipe (LLVM 12.0.0, 256 bits)]  fp16-p/s/a=1/1/0  int8-p/s/a=1/1/0
    [0 llvmpipe (LLVM 12.0.0, 256 bits)]  subgroup=8  basic=1  vote=1  ballot=1  shuffle=0
    WARNING: lavapipe is not a conformant vulkan implementation, testing use only.
    Invalid bitcast
      %379 = bitcast i32 %378 to i16
    LLVM ERROR: Broken function
    Aborted (core dumped)
    
    nvidia-smi
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 515.86.01    Driver Version: 515.86.01    CUDA Version: 11.7     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  NVIDIA GeForce ...  Off  | 00000000:02:00.0 Off |                  N/A |
    | 33%   27C    P8     9W / 190W |      0MiB /  8192MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+
    
    vulkaninfo
    ERROR: [Loader Message] Code 0 : /usr/lib/i386-linux-gnu/libvulkan_lvp.so: wrong ELF class: ELFCLASS32
    ERROR: [Loader Message] Code 0 : /usr/lib/i386-linux-gnu/libvulkan_radeon.so: wrong ELF class: ELFCLASS32
    ERROR: [Loader Message] Code 0 : /usr/lib/i386-linux-gnu/libvulkan_intel.so: wrong ELF class: ELFCLASS32
    'DISPLAY' environment variable not set... skipping surface info
    WARNING: lavapipe is not a conformant vulkan implementation, testing use only.
    ==========
    VULKANINFO
    ==========
    
    Vulkan Instance Version: 1.2.131
    
    
    Instance Extensions: count = 18
    ====================
    	VK_EXT_acquire_xlib_display            : extension revision 1
    	VK_EXT_debug_report                    : extension revision 10
    	VK_EXT_debug_utils                     : extension revision 1
    	VK_EXT_direct_mode_display             : extension revision 1
    	VK_EXT_display_surface_counter         : extension revision 1
    	VK_KHR_device_group_creation           : extension revision 1
    	VK_KHR_display                         : extension revision 23
    	VK_KHR_external_fence_capabilities     : extension revision 1
    	VK_KHR_external_memory_capabilities    : extension revision 1
    	VK_KHR_external_semaphore_capabilities : extension revision 1
    	VK_KHR_get_display_properties2         : extension revision 1
    	VK_KHR_get_physical_device_properties2 : extension revision 2
    	VK_KHR_get_surface_capabilities2       : extension revision 1
    	VK_KHR_surface                         : extension revision 25
    	VK_KHR_surface_protected_capabilities  : extension revision 1
    	VK_KHR_wayland_surface                 : extension revision 6
    	VK_KHR_xcb_surface                     : extension revision 6
    	VK_KHR_xlib_surface                    : extension revision 6
    
    Layers: count = 3
    =======
    VK_LAYER_LUNARG_standard_validation (LunarG Standard Validation Layer) Vulkan version 1.0.131, layer version 1:
    	Layer Extensions: count = 0
    	Devices: count = 1
    		GPU id 	: 0 (llvmpipe (LLVM 12.0.0, 256 bits))
    		Layer-Device Extensions: count = 0
    
    VK_LAYER_MESA_device_select (Linux device selection layer) Vulkan version 1.2.73, layer version 1:
    	Layer Extensions: count = 0
    	Devices: count = 1
    		GPU id 	: 0 (llvmpipe (LLVM 12.0.0, 256 bits))
    		Layer-Device Extensions: count = 0
    
    VK_LAYER_MESA_overlay (Mesa Overlay layer) Vulkan version 1.1.73, layer version 1:
    	Layer Extensions: count = 0
    	Devices: count = 1
    		GPU id 	: 0 (llvmpipe (LLVM 12.0.0, 256 bits))
    		Layer-Device Extensions: count = 0
    
    Presentable Surfaces:
    =====================
    
    Groups:
    =======
    	Device Group Properties (Group 0):
    		physicalDeviceCount: count = 1
    			llvmpipe (LLVM 12.0.0, 256 bits) (ID: 0)
    		subsetAllocation = 0
    
    	Device Group Present Capabilities (Group 0):
    WARNING: lavapipe is not a conformant vulkan implementation, testing use only.
    		llvmpipe (LLVM 12.0.0, 256 bits) (ID: 0)
    		Can present images from the following devices:
    			llvmpipe (LLVM 12.0.0, 256 bits) (ID: 0)
    		Present modes:
    			DEVICE_GROUP_PRESENT_MODE_LOCAL_BIT_KHR
    
    
    Device Properties and Extensions:
    =================================
    GPU0:
    VkPhysicalDeviceProperties:
    ---------------------------
    	apiVersion     = 4198582 (1.1.182)
    	driverVersion  = 1 (0x0001)
    	vendorID       = 0x10005
    	deviceID       = 0x0000
    	deviceType     = PHYSICAL_DEVICE_TYPE_CPU
    	deviceName     = llvmpipe (LLVM 12.0.0, 256 bits)
    
    VkPhysicalDeviceLimits:
    -----------------------
    	maxImageDimension1D                             = 16384
    	maxImageDimension2D                             = 16384
    	maxImageDimension3D                             = 4096
    	maxImageDimensionCube                           = 32768
    	maxImageArrayLayers                             = 2048
    	maxTexelBufferElements                          = 134217728
    	maxUniformBufferRange                           = 65536
    	maxStorageBufferRange                           = 134217728
    	maxPushConstantsSize                            = 128
    	maxMemoryAllocationCount                        = 4294967295
    	maxSamplerAllocationCount                       = 32768
    	bufferImageGranularity                          = 0x00000040
    	sparseAddressSpaceSize                          = 0x00000000
    	maxBoundDescriptorSets                          = 8
    	maxPerStageDescriptorSamplers                   = 32
    	maxPerStageDescriptorUniformBuffers             = 15
    	maxPerStageDescriptorStorageBuffers             = 16
    	maxPerStageDescriptorSampledImages              = 128
    	maxPerStageDescriptorStorageImages              = 16
    	maxPerStageDescriptorInputAttachments           = 8
    	maxPerStageResources                            = 128
    	maxDescriptorSetSamplers                        = 32768
    	maxDescriptorSetUniformBuffers                  = 256
    	maxDescriptorSetUniformBuffersDynamic           = 256
    	maxDescriptorSetStorageBuffers                  = 256
    	maxDescriptorSetStorageBuffersDynamic           = 256
    	maxDescriptorSetSampledImages                   = 256
    	maxDescriptorSetStorageImages                   = 256
    	maxDescriptorSetInputAttachments                = 256
    	maxVertexInputAttributes                        = 32
    	maxVertexInputBindings                          = 32
    	maxVertexInputAttributeOffset                   = 2047
    	maxVertexInputBindingStride                     = 2048
    	maxVertexOutputComponents                       = 128
    	maxTessellationGenerationLevel                  = 64
    	maxTessellationPatchSize                        = 32
    	maxTessellationControlPerVertexInputComponents  = 128
    	maxTessellationControlPerVertexOutputComponents = 128
    	maxTessellationControlPerPatchOutputComponents  = 128
    	maxTessellationControlTotalOutputComponents     = 4096
    	maxTessellationEvaluationInputComponents        = 128
    	maxTessellationEvaluationOutputComponents       = 128
    	maxGeometryShaderInvocations                    = 32
    	maxGeometryInputComponents                      = 64
    	maxGeometryOutputComponents                     = 128
    	maxGeometryOutputVertices                       = 1024
    	maxGeometryTotalOutputComponents                = 1024
    	maxFragmentInputComponents                      = 128
    	maxFragmentOutputAttachments                    = 8
    	maxFragmentDualSrcAttachments                   = 2
    	maxFragmentCombinedOutputResources              = 8
    	maxComputeSharedMemorySize                      = 32768
    	maxComputeWorkGroupCount: count = 3
    		65535
    		65535
    		65535
    	maxComputeWorkGroupInvocations                  = 1024
    	maxComputeWorkGroupSize: count = 3
    		1024
    		1024
    		1024
    	subPixelPrecisionBits                           = 8
    	subTexelPrecisionBits                           = 8
    	mipmapPrecisionBits                             = 8
    	maxDrawIndexedIndexValue                        = 4294967295
    	maxDrawIndirectCount                            = 4294967295
    	maxSamplerLodBias                               = 16
    	maxSamplerAnisotropy                            = 16
    	maxViewports                                    = 16
    	maxViewportDimensions: count = 2
    		16384
    		16384
    	viewportBoundsRange: count = 2
    		-32768
    		32768
    	viewportSubPixelBits                            = 0
    	minMemoryMapAlignment                           = 64
    	minTexelBufferOffsetAlignment                   = 0x00000010
    	minUniformBufferOffsetAlignment                 = 0x00000010
    	minStorageBufferOffsetAlignment                 = 0x00000010
    	minTexelOffset                                  = -32
    	maxTexelOffset                                  = 31
    	minTexelGatherOffset                            = -32
    	maxTexelGatherOffset                            = 31
    	minInterpolationOffset                          = -2
    	maxInterpolationOffset                          = 2
    	subPixelInterpolationOffsetBits                 = 8
    	maxFramebufferWidth                             = 16384
    	maxFramebufferHeight                            = 16384
    	maxFramebufferLayers                            = 2048
    	framebufferColorSampleCounts:
    		SAMPLE_COUNT_1_BIT
    		SAMPLE_COUNT_4_BIT
    	framebufferDepthSampleCounts:
    		SAMPLE_COUNT_1_BIT
    		SAMPLE_COUNT_4_BIT
    	framebufferStencilSampleCounts:
    		SAMPLE_COUNT_1_BIT
    		SAMPLE_COUNT_4_BIT
    	framebufferNoAttachmentsSampleCounts:
    		SAMPLE_COUNT_1_BIT
    		SAMPLE_COUNT_4_BIT
    	maxColorAttachments                             = 8
    	sampledImageColorSampleCounts:
    		SAMPLE_COUNT_1_BIT
    		SAMPLE_COUNT_4_BIT
    	sampledImageIntegerSampleCounts:
    		SAMPLE_COUNT_1_BIT
    		SAMPLE_COUNT_4_BIT
    	sampledImageDepthSampleCounts:
    		SAMPLE_COUNT_1_BIT
    		SAMPLE_COUNT_4_BIT
    	sampledImageStencilSampleCounts:
    		SAMPLE_COUNT_1_BIT
    		SAMPLE_COUNT_4_BIT
    	storageImageSampleCounts:
    		SAMPLE_COUNT_1_BIT
    		SAMPLE_COUNT_4_BIT
    	maxSampleMaskWords                              = 1
    	timestampComputeAndGraphics                     = true
    	timestampPeriod                                 = 1
    	maxClipDistances                                = 8
    	maxCullDistances                                = 8
    	maxCombinedClipAndCullDistances                 = 8
    	discreteQueuePriorities                         = 2
    	pointSizeRange: count = 2
    		0
    		255
    	lineWidthRange: count = 2
    		1
    		255
    	pointSizeGranularity                            = 0.125
    	lineWidthGranularity                            = 0.0078125
    	strictLines                                     = true
    	standardSampleLocations                         = true
    	optimalBufferCopyOffsetAlignment                = 0x00000080
    	optimalBufferCopyRowPitchAlignment              = 0x00000080
    	nonCoherentAtomSize                             = 0x00000040
    
    VkPhysicalDeviceSparseProperties:
    ---------------------------------
    	residencyStandard2DBlockShape            = false
    	residencyStandard2DMultisampleBlockShape = false
    	residencyStandard3DBlockShape            = false
    	residencyAlignedMipSize                  = false
    	residencyNonResidentStrict               = false
    
    VkPhysicalDeviceDriverPropertiesKHR:
    ------------------------------------
    	driverID           = UNKNOWN_VkDriverId
    	driverName         = llvmpipe
    	driverInfo         = Mesa 21.2.6 (LLVM 12.0.0)
    	conformanceVersion = 1.0.0.0
    
    VkPhysicalDeviceIDProperties:
    -----------------------------
    	deviceUUID      = 00000000-0000-0000-0000-000000000000
    	driverUUID      = 00000000-0000-0000-0000-000000000000
    	deviceNodeMask  = 0
    	deviceLUIDValid = false
    
    VkPhysicalDeviceLineRasterizationPropertiesEXT:
    -----------------------------------------------
    	lineSubPixelPrecisionBits = 8
    
    VkPhysicalDeviceMaintenance3Properties:
    ---------------------------------------
    	maxPerSetDescriptors    = 1024
    	maxMemoryAllocationSize = 0x80000000
    
    VkPhysicalDeviceMultiviewProperties:
    ------------------------------------
    	maxMultiviewViewCount     = 6
    	maxMultiviewInstanceIndex = 2147483647
    
    VkPhysicalDevicePointClippingProperties:
    ----------------------------------------
    	pointClippingBehavior = POINT_CLIPPING_BEHAVIOR_ALL_CLIP_PLANES
    
    VkPhysicalDeviceProtectedMemoryProperties:
    ------------------------------------------
    	protectedNoFault = false
    
    VkPhysicalDevicePushDescriptorPropertiesKHR:
    --------------------------------------------
    	maxPushDescriptors = 32
    
    VkPhysicalDeviceSamplerFilterMinmaxPropertiesEXT:
    -------------------------------------------------
    	filterMinmaxSingleComponentFormats = true
    	filterMinmaxImageComponentMapping  = true
    
    VkPhysicalDeviceSubgroupProperties:
    -----------------------------------
    	subgroupSize              = 8
    	supportedStages:
    		SHADER_STAGE_FRAGMENT_BIT
    		SHADER_STAGE_COMPUTE_BIT
    		SHADER_STAGE_ALL_GRAPHICS
    		SHADER_STAGE_ALL
    	supportedOperations:
    		SUBGROUP_FEATURE_BASIC_BIT
    		SUBGROUP_FEATURE_VOTE_BIT
    		SUBGROUP_FEATURE_ARITHMETIC_BIT
    		SUBGROUP_FEATURE_BALLOT_BIT
    	quadOperationsInAllStages = false
    
    VkPhysicalDeviceTransformFeedbackPropertiesEXT:
    -----------------------------------------------
    	maxTransformFeedbackStreams                = 4
    	maxTransformFeedbackBuffers                = 4
    	maxTransformFeedbackBufferSize             = 0xffffffff
    	maxTransformFeedbackStreamDataSize         = 512
    	maxTransformFeedbackBufferDataSize         = 512
    	maxTransformFeedbackBufferDataStride       = 512
    	transformFeedbackQueries                   = true
    	transformFeedbackStreamsLinesTriangles     = false
    	transformFeedbackRasterizationStreamSelect = false
    	transformFeedbackDraw                      = true
    
    VkPhysicalDeviceVertexAttributeDivisorPropertiesEXT:
    ----------------------------------------------------
    	maxVertexAttribDivisor = 4294967295
    
    
    Device Extensions: count = 54
    ------------------
    	VK_EXT_calibrated_timestamps          : extension revision 2
    	VK_EXT_conditional_rendering          : extension revision 2
    	VK_EXT_custom_border_color            : extension revision 12
    	VK_EXT_extended_dynamic_state         : extension revision 1
    	VK_EXT_extended_dynamic_state2        : extension revision 1
    	VK_EXT_host_query_reset               : extension revision 1
    	VK_EXT_index_type_uint8               : extension revision 1
    	VK_EXT_line_rasterization             : extension revision 1
    	VK_EXT_multi_draw                     : extension revision 1
    	VK_EXT_post_depth_coverage            : extension revision 1
    	VK_EXT_private_data                   : extension revision 1
    	VK_EXT_provoking_vertex               : extension revision 1
    	VK_EXT_sampler_filter_minmax          : extension revision 2
    	VK_EXT_scalar_block_layout            : extension revision 1
    	VK_EXT_separate_stencil_usage         : extension revision 1
    	VK_EXT_shader_stencil_export          : extension revision 1
    	VK_EXT_shader_viewport_index_layer    : extension revision 1
    	VK_EXT_transform_feedback             : extension revision 1
    	VK_EXT_vertex_attribute_divisor       : extension revision 3
    	VK_EXT_vertex_input_dynamic_state     : extension revision 2
    	VK_GOOGLE_decorate_string             : extension revision 1
    	VK_GOOGLE_hlsl_functionality1         : extension revision 1
    	VK_KHR_16bit_storage                  : extension revision 1
    	VK_KHR_8bit_storage                   : extension revision 1
    	VK_KHR_bind_memory2                   : extension revision 1
    	VK_KHR_buffer_device_address          : extension revision 1
    	VK_KHR_copy_commands2                 : extension revision 1
    	VK_KHR_create_renderpass2             : extension revision 1
    	VK_KHR_dedicated_allocation           : extension revision 3
    	VK_KHR_descriptor_update_template     : extension revision 1
    	VK_KHR_device_group                   : extension revision 4
    	VK_KHR_draw_indirect_count            : extension revision 1
    	VK_KHR_driver_properties              : extension revision 1
    	VK_KHR_external_fence                 : extension revision 1
    	VK_KHR_external_memory                : extension revision 1
    	VK_KHR_external_semaphore             : extension revision 1
    	VK_KHR_get_memory_requirements2       : extension revision 1
    	VK_KHR_image_format_list              : extension revision 1
    	VK_KHR_imageless_framebuffer          : extension revision 1
    	VK_KHR_incremental_present            : extension revision 2
    	VK_KHR_maintenance1                   : extension revision 2
    	VK_KHR_maintenance2                   : extension revision 1
    	VK_KHR_maintenance3                   : extension revision 1
    	VK_KHR_multiview                      : extension revision 1
    	VK_KHR_push_descriptor                : extension revision 2
    	VK_KHR_relaxed_block_layout           : extension revision 1
    	VK_KHR_sampler_mirror_clamp_to_edge   : extension revision 3
    	VK_KHR_separate_depth_stencil_layouts : extension revision 1
    	VK_KHR_shader_atomic_int64            : extension revision 1
    	VK_KHR_shader_draw_parameters         : extension revision 1
    	VK_KHR_storage_buffer_storage_class   : extension revision 1
    	VK_KHR_swapchain                      : extension revision 70
    	VK_KHR_uniform_buffer_standard_layout : extension revision 1
    	VK_KHR_variable_pointers              : extension revision 1
    
    VkQueueFamilyProperties:
    ========================
    	queueProperties[0]:
    	------------------
    		minImageTransferGranularity = (1,1,1)
    		queueCount                  = 1
    		queueFlags                  = QUEUE_GRAPHICS | QUEUE_COMPUTE | QUEUE_TRANSFER
    		timestampValidBits          = 64
    		present support = false
    
    VkPhysicalDeviceMemoryProperties:
    =================================
    memoryHeaps: count = 1
    	memoryHeaps[0]:
    		size   = 2147483648 (0x80000000) (2.00 GiB)
    		budget = 0
    		usage  = 0
    		flags:
    			MEMORY_HEAP_DEVICE_LOCAL_BIT
    memoryTypes: count = 1
    	memoryTypes[0]:
    		heapIndex     = 0
    		propertyFlags = 0x000f:
    			MEMORY_PROPERTY_DEVICE_LOCAL_BIT
    			MEMORY_PROPERTY_HOST_VISIBLE_BIT
    			MEMORY_PROPERTY_HOST_COHERENT_BIT
    			MEMORY_PROPERTY_HOST_CACHED_BIT
    		usable for:
    			IMAGE_TILING_OPTIMAL: color images, FORMAT_D16_UNORM, FORMAT_X8_D24_UNORM_PACK32, FORMAT_D32_SFLOAT, FORMAT_S8_UINT, FORMAT_D24_UNORM_S8_UINT, FORMAT_D32_SFLOAT_S8_UINT
    			IMAGE_TILING_LINEAR: color images
    
    VkPhysicalDeviceFeatures:
    =========================
    	robustBufferAccess                      = true
    	fullDrawIndexUint32                     = true
    	imageCubeArray                          = true
    	independentBlend                        = true
    	geometryShader                          = true
    	tessellationShader                      = true
    	sampleRateShading                       = true
    	dualSrcBlend                            = true
    	logicOp                                 = true
    	multiDrawIndirect                       = true
    	drawIndirectFirstInstance               = true
    	depthClamp                              = true
    	depthBiasClamp                          = true
    	fillModeNonSolid                        = true
    	depthBounds                             = false
    	wideLines                               = true
    	largePoints                             = true
    	alphaToOne                              = true
    	multiViewport                           = true
    	samplerAnisotropy                       = false
    	textureCompressionETC2                  = false
    	textureCompressionASTC_LDR              = false
    	textureCompressionBC                    = true
    	occlusionQueryPrecise                   = true
    	pipelineStatisticsQuery                 = true
    	vertexPipelineStoresAndAtomics          = true
    	fragmentStoresAndAtomics                = true
    	shaderTessellationAndGeometryPointSize  = true
    	shaderImageGatherExtended               = true
    	shaderStorageImageExtendedFormats       = true
    	shaderStorageImageMultisample           = true
    	shaderStorageImageReadWithoutFormat     = false
    	shaderStorageImageWriteWithoutFormat    = true
    	shaderUniformBufferArrayDynamicIndexing = false
    	shaderSampledImageArrayDynamicIndexing  = false
    	shaderStorageBufferArrayDynamicIndexing = false
    	shaderStorageImageArrayDynamicIndexing  = false
    	shaderClipDistance                      = true
    	shaderCullDistance                      = true
    	shaderFloat64                           = true
    	shaderInt64                             = true
    	shaderInt16                             = true
    	shaderResourceResidency                 = false
    	shaderResourceMinLod                    = false
    	sparseBinding                           = false
    	sparseResidencyBuffer                   = false
    	sparseResidencyImage2D                  = false
    	sparseResidencyImage3D                  = false
    	sparseResidency2Samples                 = false
    	sparseResidency4Samples                 = false
    	sparseResidency8Samples                 = false
    	sparseResidency16Samples                = false
    	sparseResidencyAliased                  = false
    	variableMultisampleRate                 = false
    	inheritedQueries                        = false
    
    VkPhysicalDevice16BitStorageFeatures:
    -------------------------------------
    	storageBuffer16BitAccess           = true
    	uniformAndStorageBuffer16BitAccess = true
    	storagePushConstant16              = true
    	storageInputOutput16               = false
    
    VkPhysicalDevice8BitStorageFeaturesKHR:
    ---------------------------------------
    	storageBuffer8BitAccess           = true
    	uniformAndStorageBuffer8BitAccess = true
    	storagePushConstant8              = true
    
    VkPhysicalDeviceBufferDeviceAddressFeaturesKHR:
    -----------------------------------------------
    	bufferDeviceAddress              = true
    	bufferDeviceAddressCaptureReplay = false
    	bufferDeviceAddressMultiDevice   = false
    
    VkPhysicalDeviceConditionalRenderingFeaturesEXT:
    ------------------------------------------------
    	conditionalRendering          = true
    	inheritedConditionalRendering = false
    
    VkPhysicalDeviceHostQueryResetFeaturesEXT:
    ------------------------------------------
    	hostQueryReset = true
    
    VkPhysicalDeviceImagelessFramebufferFeaturesKHR:
    ------------------------------------------------
    	imagelessFramebuffer = true
    
    VkPhysicalDeviceIndexTypeUint8FeaturesEXT:
    ------------------------------------------
    	indexTypeUint8 = true
    
    VkPhysicalDeviceLineRasterizationFeaturesEXT:
    ---------------------------------------------
    	rectangularLines         = true
    	bresenhamLines           = true
    	smoothLines              = true
    	stippledRectangularLines = true
    	stippledBresenhamLines   = true
    	stippledSmoothLines      = true
    
    VkPhysicalDeviceMultiviewFeatures:
    ----------------------------------
    	multiview                   = true
    	multiviewGeometryShader     = true
    	multiviewTessellationShader = true
    
    VkPhysicalDeviceProtectedMemoryFeatures:
    ----------------------------------------
    	protectedMemory = false
    
    VkPhysicalDeviceSamplerYcbcrConversionFeatures:
    -----------------------------------------------
    	samplerYcbcrConversion = false
    
    VkPhysicalDeviceScalarBlockLayoutFeaturesEXT:
    ---------------------------------------------
    	scalarBlockLayout = true
    
    VkPhysicalDeviceSeparateDepthStencilLayoutsFeaturesKHR:
    -------------------------------------------------------
    	separateDepthStencilLayouts = true
    
    VkPhysicalDeviceShaderAtomicInt64FeaturesKHR:
    ---------------------------------------------
    	shaderBufferInt64Atomics = true
    	shaderSharedInt64Atomics = true
    
    VkPhysicalDeviceShaderDrawParametersFeatures:
    ---------------------------------------------
    	shaderDrawParameters = true
    
    VkPhysicalDeviceTransformFeedbackFeaturesEXT:
    ---------------------------------------------
    	transformFeedback = true
    	geometryStreams   = true
    
    VkPhysicalDeviceUniformBufferStandardLayoutFeaturesKHR:
    -------------------------------------------------------
    	uniformBufferStandardLayout = true
    
    VkPhysicalDeviceVariablePointersFeatures:
    -----------------------------------------
    	variablePointersStorageBuffer = true
    	variablePointers              = false
    
    VkPhysicalDeviceVertexAttributeDivisorFeaturesEXT:
    --------------------------------------------------
    	vertexAttributeInstanceRateDivisor     = true
    	vertexAttributeInstanceRateZeroDivisor = false
    
    
    opened by kylrth 2
  • How to convert the model file to a TensorRT file?

    I want to convert the model for dynamic-size inference, but after converting it with the steps below, the inferred image still has a fixed size. I don't know where the problem is.

    1. Modify the parts of the network structure that are hard to convert to TRT. For the real_esrgan model conversion, modify /basicsr/archs/rrdbnet_arch.py and change the commented-out lines to the lines below them:

    # feat = self.lrelu(self.conv_up1(F.interpolate(feat, scale_factor=2, mode='nearest')))
    feat = self.lrelu(self.conv_up1(F.interpolate(feat, size=[int(2 * feat.shape[2]), int(2 * feat.shape[3])], mode='nearest')))
    # feat = self.lrelu(self.conv_up2(F.interpolate(feat, scale_factor=2, mode='nearest')))
    feat = self.lrelu(self.conv_up2(F.interpolate(feat, size=[int(2 * feat.shape[2]), int(2 * feat.shape[3])], mode='nearest')))

    2. Convert to ONNX. Use the input_names and output_names arguments of torch.onnx.export to name the inputs and outputs of the ONNX model; this prevents changes in the default names from breaking the dynamic_axes argument and also makes the later TensorRT conversion easier. Change

    torch_out = torch.onnx._export(model, x, args.output, opset_version=11, export_params=True)

    to

    torch_out = torch.onnx._export(model, x, args.output, opset_version=11, export_params=True, input_names=['input'], output_names=['output'], dynamic_axes={'input': [2, 3], 'output': [2, 3]})

    4. Convert to a TensorRT file. Note that the following script needs to be added:

    profile = builder.create_optimization_profile()
    if input_shape:
        profile.set_shape(
            # input tensor name
            input_name, *input_shape)
    config.add_optimization_profile(profile)

    5. In the inference script, the part that sets and gets the shapes:

    dynamic = False
    for i, binding in enumerate(self.engine):
        shape = self.engine.get_binding_shape(binding)
        # if input is dynamic
        if -1 in shape:
            dynamic = True
            self.context.active_optimization_profile = 0
            if i < len(self.real_shapes):
                shape = self.real_shapes[i]
                self.context.set_binding_shape(i, shape)
            elif self.engine.binding_is_input(binding):
                raise ValueError(f'dynamic input must reset real shape')
            else:
                shape = self.context.get_binding_shape(i)
                logger.info('get output shape by context')
        elif dynamic:
            logger.info('output shape in trt file is fixed but input is dynamic')
            if i < len(self.real_shapes):
                shape = self.real_shapes[i]
                logger.info('reset output shape by real shapes')
            else:
                shape = self.context.get_binding_shape(i)
                logger.info('reset and get output shape by context')

    opened by zhu2bowen 0
  • GPU usage does not increase

    Command: python inference_realesrgan_video.py -i inputs/01.mp4 -n realesr-animevideov3 -s 2 --suffix outx2

    Even after adding the --num_process_per_gpu parameter, it still runs very slowly.

    The laptop has already been switched to dedicated-GPU direct-connection mode. (Screenshot 285 attached.)

    opened by OOPPEENN 0