A cross-platform OCR library based on PaddleOCR & ONNX Runtime

Overview

RapidOCR (捷智OCR)

Simplified Chinese | English

Table of Contents

Introduction

  • A completely open-source, free, multi-platform, multi-language OCR SDK that supports offline deployment

  • Chinese community note: you are welcome to join our QQ group to download models and test programs. QQ group number: 887298230

  • Origin: Baidu's PaddlePaddle is not very convenient to deploy in production. To make it easy to run OCR inference on all kinds of devices, we converted the models to ONNX format and ported them to each platform using Python/C++/Java/Swift/C# (a minimal inference sketch appears after this list).

  • Name origin: light, fast, economical, and smart. Deep-learning-based OCR that plays to the strengths of AI with small models, taking speed as its mission and accuracy as its priority.

  • Built on Baidu's open-source PaddleOCR models and training pipeline. Anyone can use this inference library, or optimize the models with Baidu's PaddlePaddle framework according to their own needs.
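
As a quick illustration of what the converted models enable, here is a minimal sketch. It assumes you have downloaded ch_ppocr_mobile_v2.0_det_infer.onnx from the Models section below and installed onnxruntime and numpy; the preprocessing is deliberately simplified, and the real pipeline lives under the python/ directory.

    import numpy as np
    import onnxruntime as ort

    # Load the converted PaddleOCR detection model with ONNX Runtime.
    sess = ort.InferenceSession("ch_ppocr_mobile_v2.0_det_infer.onnx")
    input_name = sess.get_inputs()[0].name

    # Dummy normalized image tensor (NCHW). Real inputs are resized so that
    # height and width are multiples of 32, then mean/std normalized.
    img = np.random.rand(1, 3, 640, 640).astype(np.float32)

    # The output is a text-region probability map that the detector's
    # post-processing turns into text boxes.
    prob_map = sess.run(None, {input_name: img})[0]
    print(prob_map.shape)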

Recent Updates

🍺 2021-06-20 update

  • Improved the display of recognition results in ocrweb and added an animated recognition demo
  • Updated the datasets directory with links to some commonly used datasets (collected from elsewhere ^-^)
  • Updated the FAQ

2021-06-10 update

2021-06-08 update

  • Cleaned up the repository and unified the model download paths
  • Improved the related documentation

2021-03-24 update

  • The new models are fully compatible with ONNX Runtime 1.7 or later. Special thanks to @Channingss
  • The new onnxruntime version improves performance by more than 40% compared with 1.6.0.

Overall Framework

FAQ

SDK Build Status

Since Ubuntu users are generally commercial users capable of building from source, no prebuilt package is provided for Linux at the moment; please build it yourself.

Platform           Build Status             Availability
Windows x86/x64    CMake-windows-x86-x64    Download via the badge on the right
Linux x64          CMake-linux              Not provided for now; build it yourself

Online Demo

  • Web demo
  • The model combination used by the demo is: server det + mobile cls + mobile rec
  • Example image:

Project Structure

(click to expand)
RapidOCR
├── android             # Android project directory
├── api4cpp             # Source of the C cross-platform interface library; build with the CMakeLists.txt in the repository root
├── assets              # Images used for demos, not a test set
├── commonlib           # Common library
├── cpp                 # C++ project folder
├── datasets            # Collection of common OCR-related datasets
├── dotnet              # .NET project directory
├── FAQ.md              # Collected questions and answers
├── images              # Test images: two typical samples, one natural scene and one long text
├── include             # Header files for building the C interface library
├── ios                 # iOS project directory
├── jvm                 # Java-based project directory
├── lib                 # Library files used to build the C interface library; binaries are not committed by default
├── models              # Download information for usable model files (hosted on Baidu NetDisk)
├── ocrweb              # Web demo based on Python and Flask
├── python              # Python inference code directory
├── release             #
└── tools               # Miscellaneous conversion scripts

Current Progress

  • C++ example (Windows/Linux/macOS): demo
  • JVM example (Java/Kotlin): demo
  • .NET example (C#): demo
  • Android example: demo
  • Python example: demo
  • iOS example: waiting for a contributor
  • Rewrite the C++ inference code based on the Python version to improve inference results, and add support for gif/tga/webp images

Models

  • Models that can be downloaded and used directly (download link; extraction code: 30jv)

    ch_ppocr_mobile_v2.0_det_infer.onnx
    ch_ppocr_mobile_v2.0_cls_infer.onnx
    ch_ppocr_mobile_v2.0_rec_infer.onnx
    
    ch_ppocr_server_v2.0_det_infer.onnx
    ch_ppocr_server_v2.0_rec_infer.onnx
    
    japan_rec_crnn.onnx
    
  • Notes on model conversion (a rough verification sketch follows below)
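
As a rough companion to those notes (a minimal sketch, not the repository's official conversion procedure): once a PaddleOCR inference model has been exported to ONNX, for example with the paddle2onnx tool, the result can be sanity-checked before distribution. The file name below is taken from the model list above.

    import onnx
    import onnxruntime as ort

    model_path = "ch_ppocr_mobile_v2.0_rec_infer.onnx"

    # Structural validity check of the exported graph.
    onnx.checker.check_model(onnx.load(model_path))

    # Inspect the input layout and confirm dynamic axes survived the conversion.
    sess = ort.InferenceSession(model_path)
    for inp in sess.get_inputs():
        print(inp.name, inp.shape, inp.type)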

Original Initiators and Founding Authors

Copyright Notice

  • If your product uses all or part of the code, text, or materials in this repository,
  • please credit the source and include our GitHub URL: https://github.com/RapidOCR/RapidOCR

License

  • The OCR models are copyrighted by Baidu; the rest of the project code is copyrighted by the owners of this repository.
  • This software is licensed under the LGPL. Contributions are welcome: feel free to submit issues or even PRs.

Contact Us

  • You can reach us via the QQ group: 887298230

  • If the group number cannot be found by search, click this link directly to find us

  • Scan the QR code below with QQ:

Example Images

C++/JVM example image

.NET example image

Multi-language example image

Comments
  • Error converting ONNX to OpenVINO

    The ONNX model I used was ch_ppocr_mobile_v2.0_rec_infer.onnx, downloaded from the net disk you provided. The conversion command was python "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\mo.py" --input_model=ch_ppocr_mobile_v2.0_rec_infer.onnx --output_dir=. --model_name=model_rec --data_type=FP32 and it produced the error shown in the attached image. What does this error mean, and how can it be fixed?

    opened by Dandelion111 13
  • Trouble with installation

    pip install https://github.com/RapidAI/RapidOCR/raw/main/release/python_sdk/sdk_rapidocr_v1.0.0/rapidocr-1.0.0-py3-none-any.whl -i https://pypi.douban.com/simple/
    Looking in indexes: https://pypi.douban.com/simple/
    Collecting rapidocr==1.0.0
      Using cached https://github.com/RapidAI/RapidOCR/raw/main/release/python_sdk/sdk_rapidocr_v1.0.0/rapidocr-1.0.0-py3-none-any.whl (18 kB)
    Collecting six>=1.15.0
      Downloading https://pypi.doubanio.com/packages/d9/5a/e7c31adbe875f2abbb91bd84cf2dc52d792b5a01506781dbcf25c91daf11/six-1.16.0-py2.py3-none-any.whl (11 kB)
    Requirement already satisfied: numpy>=1.19.3 in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages (from rapidocr==1.0.0) (1.21.4)
    Collecting pyclipper>=1.2.1
      Downloading https://pypi.doubanio.com/packages/24/6e/b7b4d05383cb654560d63247ddeaf8b4847b69b68d8bc6c832cd7678dab1/pyclipper-1.3.0.zip (142 kB)
        |████████████████████████████████| 142 kB 2.7 MB/s
    Installing build dependencies ... done
    Getting requirements to build wheel ... done
    Preparing metadata (pyproject.toml) ... done
    Requirement already satisfied: Shapely>=1.7.1 in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages (from rapidocr==1.0.0) (1.8.0)
    ERROR: Could not find a version that satisfies the requirement onnxruntime>=1.7.0 (from rapidocr) (from versions: none)
    ERROR: No matching distribution found for onnxruntime>=1.7.0

    But I do have onnxruntime on my Mac.

    🍺 /opt/homebrew/Cellar/onnxruntime/1.9.1: 77 files, 11.9MB

    opened by sxflynn 9
  • CMake build problem when using only onnxruntime-1.7.0-shared.7z and opencv-3.4.13-sharedLib.7z

    OS: Windows 10 x64, language: C++, building RapidOCR. If only onnxruntime-1.7.0-shared.7z and opencv-3.4.13-sharedLib.7z are used, running build.bat directly fails with an error that CMake cannot find onnxruntime, so only onnxruntime-1.6.0-sharedLib.7z can be used.

    opened by Liudyan 8
  • E:onnxruntime:, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running ScatterND node. Name:'ScatterND@1' Status Me

    Environment: Windows. Tool: Anaconda3-2020.11-Windows-x86_64. In Anaconda: conda create -n base37 python=3.7, then requirements.txt was installed into base37, and rapidOCR.py was executed on Windows using the base37 environment.

    Error:

    C:\ProgramData\Anaconda3\python.exe E:/comm_Item/Item_doing/ocr_recog_py/RapidOCR/python/rapidOCR.py
    dt_boxes num : 17, elapse : 0.11702466011047363
    cls num : 17, elapse : 0.016003131866455078
    2021-06-06 17:06:33.2157753 [E:onnxruntime:, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running ScatterND node. Name:'ScatterND@1' Status Message: updates tensor should have shape equal to indices.shape[:-1] + data.shape[indices.shape[-1]:]. updates shape: {1}, indices shape: {1}, data shape: {3}
    Traceback (most recent call last):
      File "E:/comm_Item/Item_doing/ocr_recog_py/RapidOCR/python/rapidOCR.py", line 271, in <module>
        dt_boxes, rec_res = text_sys(args.image_path)
      File "E:/comm_Item/Item_doing/ocr_recog_py/RapidOCR/python/rapidOCR.py", line 195, in __call__
        rec_res, elapse = self.text_recognizer(img_crop_list)
      File "E:\comm_Item\Item_doing\ocr_recog_py\RapidOCR\python\ch_ppocr_mobile_v2_rec\text_recognize.py", line 115, in __call__
        preds = self.session.run(None, onnx_inputs)[0]
      File "C:\ProgramData\Anaconda3\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 188, in run
        return self._sess.run(output_names, input_feed, run_options)
    onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running ScatterND node. Name:'ScatterND@1' Status Message: updates tensor should have shape equal to indices.shape[:-1] + data.shape[indices.shape[-1]:]. updates shape: {1}, indices shape: {1}, data shape: {3}

    Process finished with exit code 1

    opened by xinsuinizhuan 8
  • No results from inference when using onnxruntime with TensorRT

    I built onnxruntime with TensorRT to see if there could be any performance improvements with RapidOCR but unfortunately, the inference returned an empty array. Here's the log:

    C:\Users\samay\Documents\RapidOCR\python>python rapidOCR.py
    2021-07-08 00:09:15.7074765 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2021-07-08 05:09:15   ERROR] Parameter check failed at: engine.cpp::nvinfer1::rt::ExecutionContext::setBindingDimensions::1136, condition: profileMaxDims.d[i] >= dimensions.d[i]
    Traceback (most recent call last):
      File "C:\Users\samay\Documents\RapidOCR\python\rapidOCR.py", line 257, in <module>
        dt_boxes, rec_res = text_sys(args.image_path)
      File "C:\Users\samay\Documents\RapidOCR\python\rapidOCR.py", line 177, in __call__
        dt_boxes, elapse = self.text_detector(img)
      File "C:\Users\samay\Documents\RapidOCR\python\ch_ppocr_mobile_v2_det\text_detect.py", line 136, in __call__
        dt_boxes = post_result[0]['points']
    IndexError: list index out of range
    

    I'm by no means an expert in model conversion, so I'm guessing TensorRT simply doesn't support the converted ONNX model? Is there a way to make it work?

    opened by samayala22 7
  • Question about python+onnx+onnxruntime inference time

    Hello, while testing I found that python+onnx+onnxruntime inference is slower than python+paddle+mkl. Is there some setting I have not enabled? I unified the preprocessing parameters of the two code paths. My CPU is an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz (1.19 GHz).

    opened by Gmgge 4
  • A suggestion about the results returned by the API

    From the documentation, the current return result looks like this:

    [['0', '香港深圳抽血', '0.93583983'], ['1', '专业查性别', '0.89865875'], ['2', '专业鉴定B超单', '0.9955703'], ['3', 'b超仪器查性别', '0.99489486'], ['4', '加微信eee', '0.99073666'], ['5', '可邮寄', '0.99923944']]
    

    The coordinates are actually quite important; they can be used later to split the content into sections.

    The data format below is an example of an API result I implemented in my own project.

    lines: provides a merged text result, convenient for direct display. regions: the recognized regions, with the text and coordinates of each region.

    The code is here: https://github.com/cuiliang/RapidOCR/blob/Quicker/ocrweb/api_task.py
    I don't know Python well; the code was just pieced together by feel 😂, for reference only.

    
    {
      "result": {
        "lines": "Filters Is:issueis:open\n\n10pen 35Closed\n\n建议:将ppocr_keys等信息直接存储到onnx模型\n#42 opened 4daysagoby AutumnSun1996",
        "regions": [
          {
            "text": "Filters",
            "confidence": 0.9966548,
            "rect": {
              "left": 57,
              "top": 0,
              "right": 116,
              "bottom": 2
            }
          },
          {
            "text": "Is:issueis:open",
            "confidence": 0.84303313,
            "rect": {
              "left": 210,
              "top": 2,
              "right": 347,
              "bottom": 3
            }
          },
          {
            "text": "10pen",
            "confidence": 0.976416,
            "rect": {
              "left": 89,
              "top": 88,
              "right": 160,
              "bottom": 88
            }
          },
          {
            "text": "35Closed",
            "confidence": 0.9819431,
            "rect": {
              "left": 213,
              "top": 89,
              "right": 305,
              "bottom": 89
            }
          },
          {
            "text": "建议:将ppocr_keys等信息直接存储到onnx模型",
            "confidence": 0.97398514,
            "rect": {
              "left": 90,
              "top": 158,
              "right": 594,
              "bottom": 158
            }
          },
          {
            "text": "#42 opened 4daysagoby AutumnSun1996",
            "confidence": 0.9657532,
            "rect": {
              "left": 91,
              "top": 199,
              "right": 442,
              "bottom": 198
            }
          }
        ]
      },
      "info": {
        "total_elapse": 0.45919999999999994,
        "elapse_part": {
          "det_elapse": "0.3858",
          "cls_elapse": "0.0011",
          "rec_elapse": "0.0723"
        }
      }
    }
    
    enhancement 
    opened by cuiliang 3
  • Suggestion: store ppocr_keys and other model-specific information directly in the ONNX model

    I suggest storing ppocr_keys, rec_img_shape, and similar information directly in the ONNX model.

    Currently, ppocr_keys lives in a separate txt file whose path is configured in config.yaml, and rec_img_shape is also configured in config.yaml. Both parameters are tightly coupled to the ONNX model, so they could be stored inside it as metadata, reducing the need for configuration. ppocr_keys in particular is currently distributed as a separate file, which can easily get out of sync with the model. ONNX itself supports storing custom metadata; with this approach, deployment-related configuration should become simpler.

    Reference code:

    # Add metadata to the model
    import onnx
    
    model = onnx.load_model('/path/to/model.onnx')
    meta = model.metadata_props.add()
    meta.key = 'dictionary'
    meta.value = open('/path/to/ppocr_keys_v1.txt', 'r', -1, 'u8').read()
    
    meta = model.metadata_props.add()
    meta.key = 'shape'
    meta.value = '[3,48,320]'
    
    onnx.save_model(model, '/path/to/model.onnx')
    
    # Read the metadata back at inference time
    import json
    import onnxruntime as ort
    
    sess = ort.InferenceSession('/path/to/model.onnx')
    metamap = sess.get_modelmeta().custom_metadata_map
    chars = metamap['dictionary'].splitlines()
    input_shape = json.loads(metamap['shape'])
    
    opened by AutumnSun1996 3
  • Detect whether recognized text contains common prohibited words

    Converts the publicly available base64-encoded prohibited-word data (https://gitee.com/xstudio/badwords/tree/master) to UTF-8 and uses an Aho-Corasick automaton to check the recognized text for prohibited words. It also merges the earlier js/css/url injection checks into a single class (detection.py / class Detection()) and adds a detection switch in task.py so each of these checks can be enabled selectively (all default to true, i.e. all enabled).
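
    A generic sketch of the Aho-Corasick matching step (not this PR's detection.py code), using the third-party pyahocorasick package; the word list and text below are illustrative only.

    import ahocorasick

    # Illustrative entries only; the real list comes from the badwords dataset above.
    banned_words = ["查性别", "加微信"]

    automaton = ahocorasick.Automaton()
    for word in banned_words:
        automaton.add_word(word, word)
    automaton.make_automaton()

    text = "专业查性别,加微信eee"
    hits = [(end - len(word) + 1, word) for end, word in automaton.iter(text)]
    print(hits)  # [(2, '查性别'), (6, '加微信')] -> start offsets of matched words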

    opened by innerVoi 2
  • Can someone share demo tool (cpp)?

    Hi,

    I want to use the .NET tool to compare models, but I couldn't download the files because they are shared via QQ, and I couldn't register an account. Can someone share these files with me, please? Via Google Drive, WeTransfer, Telegram, etc. Thanks.

    Links:

    https://github.com/RapidAI/RapidOCR/blob/main/docs/README_en.md#demo https://github.com/RapidAI/RapidOCR/tree/main/cpp#demo%E4%B8%8B%E8%BD%BDwinmaclinux

    enhancement 
    opened by yeu-github 2
  • Multi-language deployment based on the OCRWeb implementation

    • Supports multiple languages at the same time; the language and other inference parameters can be configured via API parameters
    • Switches result display to a canvas-based approach, reducing backend processing and data transfer over the API
    • Adds token authentication support to the prediction API
    • Adds a PyInstaller packaging script to simplify installation

    Example packaged build: https://github.com/AutumnSun1996/RapidOCR/releases/tag/v1.1.1-ocrweb-multi

    opened by AutumnSun1996 2
  • Loading .onnx models by opencv

    Discussed in https://github.com/RapidAI/RapidOCR/discussions/58

    Originally posted by senstar-hsoleimani on December 6, 2022: I downloaded the ONNX models provided on Google Drive, but I could not read them with OpenCV {cv::dnn::ReadNet()}. Can anyone help, please?

    opened by SWHL 1
Releases(v1.1.0)
  • v1.1.0(Aug 17, 2022)

    Highlights of this release:

    1. The dictionary file required by text recognition is now written into the ONNX model, so only the model file needs to be distributed and the model and dictionary can no longer get out of sync. Thanks to AutumnSun1996 for proposing this in issue 42.
    2. The current inference code remains compatible with v1.0.0: if a dictionary file is passed in, the passed-in dictionary takes priority.
    3. The visualization font msyh.ttc has been replaced with FZYTK.TTF (FZ YaoTi), mainly because the latter file is smaller and faster to download.
    4. Module names have been unified to avoid confusion.
    5. The ocrweb module adds an API deployment and invocation mode; for details, see "Run and call as an API".
    6. Added the detection post-processing parameter score_mode=slow to improve the recognition rate, by @DogeVenci in https://github.com/RapidAI/RapidOCR/pull/37

    Note!

    • If downloading the attachments is slow, you can download them from Gitee instead; the files are identical.

    Directory structure of each attachment:

    • ocrweb_v1.1.0.zip

      ocrweb_v1.1.0/
      ├── api.py
      ├── config.yaml
      ├── main.py
      ├── rapidocr_onnxruntime
      │   ├── ch_ppocr_v2_cls
      │   ├── ch_ppocr_v3_det
      │   ├── ch_ppocr_v3_rec
      │   ├── __init__.py
      │   └── rapid_ocr_api.py
      ├── README.md
      ├── requirements.txt
      ├── resources
      │   └── models
      │       ├── ch_ppocr_mobile_v2.0_cls_infer.onnx
      │       ├── ch_PP-OCRv3_det_infer.onnx
      │       └── ch_PP-OCRv3_rec_infer.onnx
      ├── static
      │   ├── css
      │   └── js
      ├── task.py
      └── templates
          └── index.html
      
    • rapidocr_onnxruntime_v1.1.0.zip

      rapidocr_onnxruntime_v1.1.0/
      ├── config.yaml
      ├── rapidocr_onnxruntime
      │   ├── ch_ppocr_v2_cls
      │   ├── ch_ppocr_v3_det
      │   ├── ch_ppocr_v3_rec
      │   ├── __init__.py
      │   └── rapid_ocr_api.py
      ├── README.md
      ├── requirements.txt
      ├── resources
      │   ├── fonts
      │   │    └── FZYTK.TTF
      │   └── models
      │       ├── ch_ppocr_mobile_v2.0_cls_infer.onnx
      │       ├── ch_PP-OCRv3_det_infer.onnx
      │       └── ch_PP-OCRv3_rec_infer.onnx
      ├── setup.py
      ├── test_demo.py
      └── test_images
          ├── ch_en_num.jpg
          └── single_line_text.jpg
      
    • rapidocr_openvino_v1.1.0.zip

      rapidocr_openvino_v1.1.0/
      ├── config.yaml
      ├── rapidocr_openvino
      │   ├── ch_ppocr_v2_cls
      │   ├── ch_ppocr_v3_det
      │   ├── ch_ppocr_v3_rec
      │   ├── __init__.py
      │   ├── rapid_ocr_api.py
      │   └── README.md
      ├── README.md
      ├── requirements.txt
      ├── resources
      │   ├── fonts
      │   │    └── FZYTK.TTF
      │   └── models
      │       ├── ch_ppocr_mobile_v2.0_cls_infer.onnx
      │       ├── ch_PP-OCRv3_det_infer.onnx
      │       └── ch_PP-OCRv3_rec_infer.onnx
      ├── setup.py
      ├── test_demo.py
      └── test_images
          ├── ch_en_num.jpg
          └── single_line_text.jpg
      
    • required_for_whl_v1.1.0.zip

      required_for_whl_v1.1.0/
      ├── config.yaml
      ├── README.md
      ├── resources
      │   └── models
      │       ├── ch_ppocr_mobile_v2.0_cls_infer.onnx
      │       ├── ch_PP-OCRv3_det_infer.onnx
      │       └── ch_PP-OCRv3_rec_infer.onnx
      ├── test_demo.py
      └── test_images
          ├── ch_en_num.jpg
          └── single_line_text.jpg
      
    • resources.zip

      resources/
      ├── fonts
      │   └── FZYTK.TTF
      └── models
          ├── ch_ppocr_mobile_v2.0_cls_infer.onnx
          ├── ch_PP-OCRv3_det_infer.onnx
          └── ch_PP-OCRv3_rec_infer.onnx
      
    Source code(tar.gz)
    Source code(zip)
    FZYTK.TTF(3.09 MB)
    ocrweb_v1.1.0.zip(11.86 MB)
    rapidocr_onnxruntime_v1.1.0.zip(13.43 MB)
    rapidocr_openvino_v1.1.0.zip(13.44 MB)
    rapid_layout_models.zip(6.50 MB)
    rapid_table_models.zip(13.54 MB)
    required_for_whl_v1.1.0.zip(11.78 MB)
    resources.zip(13.30 MB)
  • v1.0.0(Jul 9, 2022)

    v1.0.0

    • Since using the inference code directly from the existing repository involves tedious setup, this release packages the resources directory together with each inference scenario as an independent bundle.
    • The contents of the resources directory are shown below; the models are the same as those in the download link given in the repository, which is the current best model combination.
      resources
      ├── fonts
      │   └── msyh.ttc
      ├── models
      │   ├── ch_ppocr_mobile_v2.0_cls_infer.onnx
      │   ├── ch_PP-OCRv3_det_infer.onnx
      │   └── ch_PP-OCRv3_rec_infer.onnx
      └── rec_dict
          └── ppocr_keys_v1.txt
      
    • You can go to Baidu NetDisk | Google Drive to download other models according to your needs.
    • Each zip file attached below contains the complete runtime code and models; download one directly and follow its README to run the sample demo.

    • Note: if the download is slow, you can get the same files from Gitee.
    Source code(tar.gz)
    Source code(zip)
    ocrweb_v1.0.0.zip(11.78 MB)
    rapidocr_onnxruntime_v1.0.0.zip(23.61 MB)
    rapidocr_openvino_v1.0.0.zip(23.61 MB)
    required_for_whl_v1.0.0.zip(11.77 MB)
    resources.zip(23.39 MB)
  • V1.0(Mar 27, 2021)

Owner
RapidOCR Team
An open-source team for the development of OCR and related tools.