Easy-to-use, Modular and Extendible package of deep-learning based CTR models.

Overview

DeepCTR


DeepCTR is an easy-to-use, modular and extendible package of deep-learning based CTR models, along with lots of core component layers that can be used to easily build custom models. You can build any complex model and train it with model.fit() and model.predict() (see the sketch after the list below).

  • Provides a tf.keras.Model-like interface for quick experiments. example
  • Provides a TensorFlow estimator interface for large-scale data and distributed training. example
  • Compatible with both tf 1.x and tf 2.x.
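
A minimal sketch of that workflow, assuming the documented DeepFM and feature-column APIs; the feature names C1/I1 and the random toy data are illustrative, not taken from this README:

    import numpy as np
    from deepctr.models import DeepFM
    from deepctr.feature_column import SparseFeat, DenseFeat, get_feature_names

    # One sparse and one dense feature; the names are hypothetical.
    feature_columns = [SparseFeat('C1', vocabulary_size=100, embedding_dim=4),
                       DenseFeat('I1', 1)]
    feature_names = get_feature_names(feature_columns)

    # 32 random toy samples, keyed by feature name.
    data = {'C1': np.random.randint(0, 100, size=32),
            'I1': np.random.rand(32)}
    labels = np.random.randint(0, 2, size=32)

    # Same columns for the linear and DNN parts, as in the official examples.
    model = DeepFM(feature_columns, feature_columns, task='binary')
    model.compile("adam", "binary_crossentropy", metrics=['AUC'])
    model.fit({name: data[name] for name in feature_names}, labels,
              batch_size=16, epochs=1)
    preds = model.predict(data, batch_size=16)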

Some related projects: DeepCTR-Torch and DeepMatch.

Let's Get Started! (Chinese Introduction), and welcome to join us!

Models List

Model | Paper
Convolutional Click Prediction Model | [CIKM 2015] A Convolutional Click Prediction Model
Factorization-supported Neural Network | [ECIR 2016] Deep Learning over Multi-field Categorical Data: A Case Study on User Response Prediction
Product-based Neural Network | [ICDM 2016] Product-based neural networks for user response prediction
Wide & Deep | [DLRS 2016] Wide & Deep Learning for Recommender Systems
DeepFM | [IJCAI 2017] DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Piece-wise Linear Model | [arXiv 2017] Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
Deep & Cross Network | [ADKDD 2017] Deep & Cross Network for Ad Click Predictions
Attentional Factorization Machine | [IJCAI 2017] Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks
Neural Factorization Machine | [SIGIR 2017] Neural Factorization Machines for Sparse Predictive Analytics
xDeepFM | [KDD 2018] xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems
Deep Interest Network | [KDD 2018] Deep Interest Network for Click-Through Rate Prediction
AutoInt | [CIKM 2019] AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks
Deep Interest Evolution Network | [AAAI 2019] Deep Interest Evolution Network for Click-Through Rate Prediction
FwFM | [WWW 2018] Field-weighted Factorization Machines for Click-Through Rate Prediction in Display Advertising
ONN | [arXiv 2019] Operation-aware Neural Networks for User Response Prediction
FGCNN | [WWW 2019] Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction
Deep Session Interest Network | [IJCAI 2019] Deep Session Interest Network for Click-Through Rate Prediction
FiBiNET | [RecSys 2019] FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction
FLEN | [arXiv 2019] FLEN: Leveraging Field for Scalable CTR Prediction
BST | [DLP-KDD 2019] Behavior sequence transformer for e-commerce recommendation in Alibaba
IFM | [IJCAI 2019] An Input-aware Factorization Machine for Sparse Prediction
DCN V2 | [arXiv 2020] DCN V2: Improved Deep & Cross Network and Practical Lessons for Web-scale Learning to Rank Systems
DIFM | [IJCAI 2020] A Dual Input-aware Factorization Machine for CTR Prediction
FEFM and DeepFEFM | [arXiv 2020] Field-Embedded Factorization Machines for Click-through rate prediction
SharedBottom | [arXiv 2017] An Overview of Multi-Task Learning in Deep Neural Networks
ESMM | [SIGIR 2018] Entire Space Multi-Task Model: An Effective Approach for Estimating Post-Click Conversion Rate
MMOE | [KDD 2018] Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts
PLE | [RecSys 2020] Progressive Layered Extraction (PLE): A Novel Multi-Task Learning (MTL) Model for Personalized Recommendations

Citation

If you find this code useful in your research, please cite it using the following BibTeX:

@misc{shen2017deepctr,
  author = {Weichen Shen},
  title = {DeepCTR: Easy-to-use,Modular and Extendible package of deep-learning based CTR models},
  year = {2017},
  publisher = {GitHub},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/shenweichen/deepctr}},
}

Discussion Group

  • Discussions

  • WeChat Official Account: 浅梦学习笔记

  • WeChat ID: deepctrbot

    [WeChat QR code]

Main contributors (welcome to join us!)

  • Shen Weichen (Alibaba Group)
  • Zan Shuxun (Alibaba Group)
  • Harshit Pande (Amazon)
  • Lai Mincai (ShanghaiTech University)
  • Li Zichao (Peking University)
  • Tan Tingyi (Chongqing University of Posts and Telecommunications)

Comments
  • Error with TF 2.7

    WARNING:tensorflow: The following Variables were used a Lambda layer's call (tf.linalg.matmul_1), but are not present in its tracked objects: <tf.Variable 'dense_1/kernel:0' shape=(64, 1) dtype=float32> It is possible that this is intended behavior, but it is more likely an omission. This is a strong indication that this layer should be formulated as a subclassed Layer rather than a Lambda layer.

        AttributeError                            Traceback (most recent call last)
        in ()
             31
             32 # 4.Define Model,train,predict and evaluate
        ---> 33 model = DeepFM(linear_feature_columns, dnn_feature_columns, task='regression')
             34 model.compile("adam", "mse", metrics=['mse'], )
             35

        4 frames
        /usr/local/lib/python3.7/dist-packages/keras/engine/functional_utils.py in is_input_keras_tensor(tensor)
             46   if not node_module.is_keras_tensor(tensor):
             47     raise ValueError(_KERAS_TENSOR_TYPE_CHECK_ERROR_MSG.format(tensor))
        ---> 48   return tensor.node.is_input
             49
             50

        AttributeError: 'KerasTensor' object has no attribute 'node'

    Is this caused by the TensorFlow version?

    bug to be solved question 
    opened by moseshu 16
  • Is this a Keras version mismatch?

        Traceback (most recent call last):
          File "E:/testings/DeepCTR-master/examples/run_classification_criteo.py", line 44, in <module>
            model = DeepFM(linear_feature_columns, dnn_feature_columns, task='binary')
          File "E:\testings\DeepCTR-master\deepctr\models\deepfm.py", line 64, in DeepFM
            model = tf.keras.models.Model(inputs=inputs_list, outputs=output)
          File "C:\Users\Lenovo\anaconda3\envs\python37\lib\site-packages\tensorflow\python\training\tracking\base.py", line 629, in _method_wrapper
            result = method(self, *args, **kwargs)
          File "C:\Users\Lenovo\anaconda3\envs\python37\lib\site-packages\keras\engine\functional.py", line 144, in __init__
            for t in tf.nest.flatten(inputs)]):
          File "C:\Users\Lenovo\anaconda3\envs\python37\lib\site-packages\keras\engine\functional.py", line 144, in <listcomp>
            for t in tf.nest.flatten(inputs)]):
          File "C:\Users\Lenovo\anaconda3\envs\python37\lib\site-packages\keras\engine\functional_utils.py", line 48, in is_input_keras_tensor
            return tensor.node.is_input
        AttributeError: 'KerasTensor' object has no attribute 'node'

    to be solved question 
    opened by Dimitri666 13
  • Error running the DIN example code run_din.py

    Running https://github.com/shenweichen/DeepCTR/blob/master/examples/run_din.py raises:

        ..../site-packages/deepctr/layers/sequence.py:198 call  *
            outputs._uses_learning_phase = attention_score._uses_learning_phase
        AttributeError: 'Tensor' object has no attribute '_uses_learning_phase'

    How can this be solved? Operating environment(运行环境):

    • python version 3.6
    • tensorflow version 1.4.0
    • deepctr version 0.5.2
    opened by waterbeach 12
  • Error running the Criteo classification example

    When running the example from the docs, the error message is:

        tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 104 is not in [0, 14)
            [[{{node sparse_emb_18-C14/embedding_lookup}} = ResourceGather[Tindices=DT_INT32, _class=["loc:@training/Adam/gradients/sparse_emb_18-C14/embedding_lookup_grad/Reshape"], dtype=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](sparse_emb_18-C14/embeddings, linear_emb_18-C14/Cast)]]

    Environment: tensorflow 1.11.0, keras 2.2.4, deepctr 0.2.2

    opened by somTian 12
  • New Feature: Modify the Hash layer to support a lookup table

    Modify the Hash layer to support the lookup feature.

    Two hash techniques are now supported in the Hash layer:

    1. Lookup Table: when vocabulary_path is set, the layer looks up the input keys in a table and outputs the corresponding values. Missing keys always return the default value, e.g. 0.
    2. Bucket Hash: when vocabulary_path is not set, Hash hashes the input keys into [0, num_buckets). The parameter mask_zero can be set to True, in which case input keys equal to 0 or 0.0 get hash value 0, and all other keys are hashed into [1, num_buckets). (A companion sketch follows the snippet below.)

    When initializing Hash, the vocabulary_path CSV file must follow this convention: the first column holds the keys and the second column the values, separated by a comma.

    The following is an example snippet:

    * `1,emerson`
    * `2,lake`
    * `3,palmer`
    
    >>> hash = Hash(
    ...   num_buckets=3+1,
    ...   vocabulary_path=filename,
    ...   default_value=0)
    >>> hash(tf.constant('lake')).numpy()
    2
    >>> hash(tf.constant('lakeemerson')).numpy()
    0
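
    For symmetry, a companion sketch of the bucket-hash mode from point 2, assuming the layer behaves exactly as described there (no vocabulary_path, mask_zero=True); which bucket an unmasked key lands in is implementation-defined:

    >>> hash = Hash(num_buckets=10, mask_zero=True)  # no vocabulary_path: bucket mode
    >>> hash(tf.constant('0')).numpy()               # zero-like keys are masked to 0
    0
    >>> int(hash(tf.constant('lake')).numpy()) in range(1, 10)  # others land in [1, num_buckets)
    True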
    
    opened by dengc367 9
  • Add the CSV hash table in the Hash layer and fix a bug

    • delete the Lambda sublayer in the LocalActivationUnit layer class

    • add vocabulary_path to SparseFeat to support the CSV HashTable functionality

    • update docs and add examples in doc

    • Remove trailing whitespace

    opened by dengc367 9
  • Can't concat when embedding_size is set to "auto"

    Describe the bug(问题描述) When the embedding size is set to "auto", the Concatenate layer can't merge the input embeddings with different sizes at axis=2:

        def concat_fun(inputs, axis=-1):
            if len(inputs) == 1:
                return inputs[0]
            else:
                return Concatenate(axis=axis)(inputs)

    To Reproduce(复现步骤) Steps to reproduce the behavior:

    1. Go to '...'
    2. Click on '....'
    3. Scroll down to '....'
    4. ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 1, 36), (None, 1, 30), (None, 1, 6), (None, 1, 12), (None, 1, 12), (None, 1, 30), (None, 1, 12)]

    Operating environment(运行环境):

    • python version [e.g. 3.4, 3.6]
    • tensorflow version [e.g. 1.4.0, 1.12.0]
    • deepctr version [e.g. 0.2.3,]

    Additional context Add any other context about the problem here.

    opened by dev-wei 8
  • Inference performance of this model is very poor; not sure why

    Describe the question(问题描述) When using the DeepFM model for inference, a single inference takes about 300 ms. With such high latency, I don't know whether it is my usage or the framework itself; testing with the provided examples gives the same result. If there is a solution, please let me know. Thanks!

    Additional context Add any other context about the problem here.

    Operating environment(运行环境):

    • python version [e.g. 3.6]
    • tensorflow version [e.g. 2.1.0,]
    • deepctr version [e.g. 0.7.4,]
    question 
    opened by rickyhuw 7
  • DIEN error: ValueError: The name "seq_length" is used 2 times in the model. All layer names should be unique.

    Describe the bug(问题描述) Running run_dien.py raises ValueError: The name "seq_length" is used 2 times in the model. All layer names should be unique.

    To Reproduce(复现步骤) Steps to reproduce the behavior:

    1. Run examples/run_dien.py
    2. See error

    Operating environment(运行环境):

    • python version [3.7.4]
    • tensorflow version [2.0.0]
    • deepctr version [0.7.4]

    Additional context Add any other context about the problem here.

    bug help wanted 
    opened by MildAdam 7
  • How do I use the gradient clipping feature?

    Describe the question(问题描述) With the same data, different models behave differently: DeepFM trains normally, but NFFM keeps hitting exploding gradients and the loss shows NaN. How should I handle this? Can gradient clipping be used here, and how? (See the sketch below.)
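
    Not from this thread, but a minimal sketch of gradient clipping with the standard tf.keras API: pass clipnorm (or clipvalue) to the optimizer instead of the "adam" string; the model here stands in for the NFFM model built earlier.

        import tensorflow as tf

        # Clip each variable's gradient norm to 1.0; clipvalue would clip
        # element-wise instead. Both are standard Keras optimizer arguments.
        optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)
        model.compile(optimizer, "binary_crossentropy", metrics=['AUC'])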

    Additional context Add any other context about the problem here.

    Operating environment(运行环境):

    • python version 3.6
    • tensorflow version [e.g. 1.3.0,]
    • deepctr version [e.g. 0.4,]
    question 
    opened by zhuangjiayue 7
  • With DeepFMEstimator on a physical GPU machine, speed drops instead of improving (about 40 minutes per epoch), with roughly 35 GB of training data

        # Imports reconstructed for readability; the original post omitted them.
        import pandas as pd
        import tensorflow as tf
        from sklearn.metrics import roc_auc_score
        from sklearn.preprocessing import LabelEncoder, MinMaxScaler
        from deepctr.estimator import DeepFMEstimator
        from deepctr.estimator.inputs import input_fn_pandas

        train_df = pd.read_parquet("parquet_train_20201219_110000", engine='pyarrow')
        test_df = pd.read_parquet("parquet_test_20201220_110000", engine='pyarrow')
        df = pd.concat([train_df, test_df], axis=0)

        train_size = len(train_df)
        test_size = len(test_df)

        target = ['label']
        dense_features = ["c1", "c2", "c3", "c4", "c5"]
        sparse_features = [x for x in df.columns if x not in dense_features + target]

        for feat in sparse_features:
            lbe = LabelEncoder()
            df[feat] = lbe.fit_transform(df[feat])
        mms = MinMaxScaler(feature_range=(0, 1))

        df[sparse_features] = df[sparse_features].fillna('-1', )
        df[dense_features] = mms.fit_transform(df[dense_features])

        dnn_feature_columns = []
        linear_feature_columns = []

        for i, feat in enumerate(sparse_features):
            dnn_feature_columns.append(tf.feature_column.embedding_column(
                tf.feature_column.categorical_column_with_identity(feat, df[feat].nunique()), 4))
            linear_feature_columns.append(
                tf.feature_column.categorical_column_with_identity(feat, df[feat].nunique()))
        for feat in dense_features:
            dnn_feature_columns.append(tf.feature_column.numeric_column(feat))
            linear_feature_columns.append(tf.feature_column.numeric_column(feat))

        train = df[0:train_size]
        test = df[train_size:]

        train_model_input = input_fn_pandas(train, sparse_features + dense_features, 'label', shuffle=True)
        test_model_input = input_fn_pandas(test, sparse_features + dense_features, None, shuffle=False)

        model = DeepFMEstimator(linear_feature_columns, dnn_feature_columns, task='binary')

        model.train(train_model_input)
        pred_ans_iter = model.predict(test_model_input)
        pred_ans = list(map(lambda x: x['pred'], pred_ans_iter))
        print("test AUC", round(roc_auc_score(test[target].values, pred_ans), 4))

    Is there anything else that needs to be configured? Running on multiple GPUs is not much faster either, and GPU utilization stays around 20%.
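
    Not an official answer, just a sketch of what multi-GPU estimator training usually looks like: pass a tf.estimator.RunConfig carrying a distribution strategy. Whether DeepFMEstimator accepts a config keyword this way is an assumption to verify against the deepctr docs.

        # Hedged sketch: distribute estimator training across local GPUs.
        strategy = tf.distribute.MirroredStrategy()
        run_config = tf.estimator.RunConfig(train_distribute=strategy)
        model = DeepFMEstimator(linear_feature_columns, dnn_feature_columns,
                                task='binary', config=run_config)  # config kwarg assumed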

    question 
    opened by whk6688 6
  • The dropout and batchnorm layers already receive training, so what does outputs._uses_learning_phase = training is not None do?

    Describe the question(问题描述) Since the bn_layers and dropout_layers inside DNN already receive the training flag to distinguish the training phase from the inference phase, what is the purpose of manually setting _uses_learning_phase? My TF version is 2.10.0, so the line that executes should be outputs._uses_learning_phase = training is not None https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/sequence.py#L266-L288

    When computing attention_score, isn't the training flag already passed down to the dropout and batchnorm layers inside DNN? https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/sequence.py#L266

    https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/core.py#L104 https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/core.py#L198 https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/core.py#L205

    Why is the manual assignment to _uses_learning_phase still needed? https://github.com/shenweichen/DeepCTR/blob/e8f4d818f9b46608bc95bb60ef0bb0633606b2f2/deepctr/layers/sequence.py#L283-L286

    Does this manual assignment actually do anything, and what would happen if it were removed? A tf.Tensor like outputs does not seem to have a _uses_learning_phase class attribute, so this line appears to simply create a new _uses_learning_phase attribute on the tensor and assign it:

    outputs._uses_learning_phase = xxx

    Any pointers would be appreciated.

    Additional context Add any other context about the problem here.

    Operating environment(运行环境):

    • python version 3.9.12
    • tensorflow version 2.10.0
    • deepctr version 0.9.3
    question 
    opened by Daemoonn 0
  • Why does Linear layer mode 0 keep the dimension for sparse features while mode 2 does not?

    In the deepctr tensorflow package, the output of Linear when only sparse features are present is reduce_sum(sparse_input, axis=-1, keep_dims=True), but when both sparse and dense features are present it is reduce_sum(sparse_input, axis=-1, keep_dims=False). What is the rationale for that? Thanks
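
    To make the shape difference concrete, a small plain-TensorFlow check (not deepctr internals; keepdims is the current spelling of keep_dims):

        import tensorflow as tf

        sparse_input = tf.ones((2, 1, 8))  # (batch_size, 1, embedding_size)
        kept = tf.reduce_sum(sparse_input, axis=-1, keepdims=True)      # shape (2, 1, 1), mode 0
        dropped = tf.reduce_sum(sparse_input, axis=-1, keepdims=False)  # shape (2, 1), mode 2
        print(kept.shape, dropped.shape)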

    question 
    opened by fengyinyang 0
  • Autopilot Updating Notes (自动驾驶更新笔记)

    Hello, the content of the repository is very comprehensive and very beneficial. Could you introduce my notes and share my understanding of autopilot with others? I hope you can continue to improve the relevant content with me. Thank you!

    Autopilot-Updating-Notes

    enhancement&feature request 
    opened by nwaysir 0
  • Bump tensorflow from 2.6.2 to 2.9.3 in /docs

    Bumps tensorflow from 2.6.2 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 1
  • How to get multiple outputs from DeepFM?

    I'm trying to use DeepFM to predict scores for the World Cup, which has 2 outputs: [left score, right score]. For the y values, the model takes a (35245, 2) shape and runs, but only gives one value in the output. Basically I'm asking how to set the number of nodes in the output layer to 2 instead of 1.

    Y_train looks like this:

       home_score  away_score
             0.0         1.0
             1.0         1.0
             0.0         1.0
             1.0         2.0
    

    The model is:

    model = DeepFM(linear_feature_columns, dnn_feature_columns, task='regression')
    model.compile("adam", "mean_squared_error",
                  metrics=['mean_squared_error'], )
    history = model.fit(train_model_input, Y_train.values,
                        batch_size=256, epochs=10, verbose=2, validation_split=0.2, )
    pred_ans = model.predict(test_model_input, batch_size=256)
    

    Using Google Colab
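
    For context, DeepCTR's built-in prediction layer emits a single unit, so task='regression' yields one output column. A generic tf.keras sketch of what a two-unit regression head looks like (plain Keras, not the DeepCTR API):

        import tensorflow as tf

        # Hypothetical: a model whose final Dense layer has 2 units, one per score.
        inputs = tf.keras.Input(shape=(16,))
        hidden = tf.keras.layers.Dense(64, activation='relu')(inputs)
        outputs = tf.keras.layers.Dense(2)(hidden)  # [home_score, away_score]
        model = tf.keras.Model(inputs, outputs)
        model.compile("adam", "mean_squared_error")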

    question 
    opened by jonathonbird 0