Jupyter Notebook tutorials on solving real-world problems with Machine Learning & Deep Learning using PyTorch

Comments
  • AttributeError: 'dict' object has no attribute 'dataset'

    I am getting this error while writing the following class code. The error message is attached below the code.

    class GPReviewDataset(data.dataset):

      def __init__(self, review, target, tokenizer, max_len):
        self.review = review
        self.target = target
        self.tokenizer = tokenizer
        self.max_len = max_len

      def __len__(self):
        # return the number of reviews we have
        return len(self.reviews)

      def __getitem__(self, item):
        # takes the index and returns the encoded review
        review = str(self.reviews[item])

        encoding = self.tokenizer.encode_plus(
          review,
          add_special_tokens=True,
          max_length=self.max_len,
          return_token_type_ids=False,
          pad_to_max_length=True,
          return_attention_mask=True,
          return_tensors='pt'
        )

        return {
          'review_text': review,
          'input_ids': encoding['input_ids'].flatten(),
          'attention_mask': encoding['attention_mask'].flatten(),
          'targets': torch.tensor(self.target, dtype=torch.long)
        }
    

    (screenshot of the error message attached)
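    For reference, a minimal sketch of how such a dataset class is usually written, assuming the intent is to subclass torch.utils.data.Dataset (capital D) and that the double-underscore method names (__init__, __len__, __getitem__) were swallowed by Markdown; the AttributeError above usually means the local name data is bound to a dict rather than to the torch.utils.data module:

    import torch
    from torch.utils.data import Dataset

    class GPReviewDataset(Dataset):
      """Minimal PyTorch Dataset sketch; the attribute names mirror the question."""

      def __init__(self, reviews, targets, tokenizer, max_len):
        self.reviews = reviews
        self.targets = targets
        self.tokenizer = tokenizer
        self.max_len = max_len

      def __len__(self):
        # number of reviews in the dataset
        return len(self.reviews)

      def __getitem__(self, item):
        # encode one review and return tensors ready for the model
        review = str(self.reviews[item])
        encoding = self.tokenizer.encode_plus(
          review,
          add_special_tokens=True,
          max_length=self.max_len,
          return_token_type_ids=False,
          padding='max_length',
          truncation=True,
          return_attention_mask=True,
          return_tensors='pt'
        )
        return {
          'review_text': review,
          'input_ids': encoding['input_ids'].flatten(),
          'attention_mask': encoding['attention_mask'].flatten(),
          'targets': torch.tensor(self.targets[item], dtype=torch.long)
        }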

    opened by ulfat191 1
  • 04.first-neural-network.ipynb error

    I am getting the following error for 04.first-neural-network.ipynb notebook:

    --------------------------------------------------
    RuntimeError     Traceback (most recent call last)
    <ipython-input-28-8756d5fbee9e> in <module>
         10 
         11     if epoch % 100 == 0:
    ---> 12       train_acc = calculate_accuracy(y_train, y_pred)
         13 
         14       y_test_pred = net(X_test)
    
    <ipython-input-24-51ba3ab94870> in calculate_accuracy(y_true, y_pred)
          1 def calculate_accuracy(y_true, y_pred):
          2   predicted = y_pred.ge(.5).view(-1)
    ----> 3   return (y_true == predicted).sum().float() / len(y_true)
    
    RuntimeError: Expected object of scalar type Float but got scalar type Byte for argument #2 'other'
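    A common workaround for this dtype mismatch (a sketch, not necessarily the fix used in the notebook) is to cast the Byte/Bool tensor produced by ge back to float before comparing it with the float targets:

    def calculate_accuracy(y_true, y_pred):
      # ge(.5) returns a Byte tensor on older PyTorch (Bool on newer);
      # casting it to float avoids the Float-vs-Byte comparison error above
      predicted = y_pred.ge(.5).view(-1).float()
      return (y_true == predicted).sum().float() / len(y_true)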
    
    opened by caxelrud 1
  • Problem printing the shape of last_hidden_state and pooled_output

    Change bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME) to bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict=False)

    With return_dict=False the model returns a plain tuple of tensors instead of an output object, so unpacking the result gives last_hidden_state and pooled_output as tensors rather than as dictionary keys (strings).

    It will help with other parts of the code as well.
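    A minimal sketch of the change in context (assuming PRE_TRAINED_MODEL_NAME is the checkpoint used in the notebook, e.g. 'bert-base-cased'); with return_dict=False the forward pass returns a tuple, so both shapes print as expected:

    from transformers import BertModel, BertTokenizer

    PRE_TRAINED_MODEL_NAME = 'bert-base-cased'  # assumption: the notebook's checkpoint
    tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)
    bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict=False)

    encoding = tokenizer('some example review text', return_tensors='pt')
    last_hidden_state, pooled_output = bert_model(
      input_ids=encoding['input_ids'],
      attention_mask=encoding['attention_mask']
    )
    print(last_hidden_state.shape)  # (batch_size, seq_len, hidden_size)
    print(pooled_output.shape)      # (batch_size, hidden_size)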

    opened by ulfat191 0
  • How to Export 'ONNX' Model?

    Hi. Thanks to this code, I was able to build a multi-label classification model. Can you tell me how to export a model built this way using torch.onnx? I get an error when I use the usual torch.onnx.export call.

    My code :

    test_comment="hello"  
    encoding = tokenizer.encode_plus(
        test_comment,
        add_special_tokens=True,
        max_length=63,
        return_token_type_ids=False,
        padding="max_length",
        return_attention_mask=True,
        return_tensors='pt',
      )
    
    
    
    torch.onnx.export(trained_model,
                        (encoding["input_ids"], encoding["attention_mask"]),
                        'model.onnx',
                        export_params=True,
                        do_constant_folding=True,
                        opset_version=11,
                        input_names=['input_ids', 'attention_mask'],
                        output_names=['output'],
    )
    

    error :

    
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-49-9c2e1e064898> in <module>
    ----> 1 torch.onnx.export(trained_model,
          2                     (encoding["input_ids"], encoding["attention_mask"]),
          3                     'model.onnx',
          4                     export_params=True,
          5                     do_constant_folding=True,
    
    ~/anaconda3/envs/myenv1/lib/python3.8/site-packages/torch/onnx/__init__.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
        273 
        274     from torch.onnx import utils
    --> 275     return utils.export(model, args, f, export_params, verbose, training,
        276                         input_names, output_names, aten, export_raw_ir,
        277                         operator_export_type, opset_version, _retain_param_name,
    
    ~/anaconda3/envs/myenv1/lib/python3.8/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
         86         else:
         87             operator_export_type = OperatorExportTypes.ONNX
    ---> 88     _export(model, args, f, export_params, verbose, training, input_names, output_names,
         89             operator_export_type=operator_export_type, opset_version=opset_version,
         90             _retain_param_name=_retain_param_name, do_constant_folding=do_constant_folding,
    
    ~/anaconda3/envs/myenv1/lib/python3.8/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, enable_onnx_checker, use_external_data_format, onnx_shape_inference)
        687 
    ...
        128             wrapper,
        129             in_vars + module_state,
    
    RuntimeError: output 1 (0
    [ CPULongType{} ]) of traced region did not have observable data dependence with trace inputs; this probably indicates your program cannot be understood by the tracer.
    

    Thank you in advance for your reply.
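    One likely cause (an assumption, not confirmed for this notebook): the LightningModule's forward returns a (loss, output) pair, and when no labels are passed the loss is a constant 0, which the tracer cannot relate to the inputs (hence the CPULongType output in the error). A hedged workaround is to export a thin wrapper that returns only the prediction tensor; the ExportWrapper class below is hypothetical and assumes trained_model's forward has the (input_ids, attention_mask) -> (loss, output) signature used in the notebook:

    import torch

    class ExportWrapper(torch.nn.Module):
      """Hypothetical wrapper that exposes only tensor outputs to the ONNX tracer."""
      def __init__(self, model):
        super().__init__()
        self.model = model

      def forward(self, input_ids, attention_mask):
        # drop the (constant) loss and return only the predictions
        _, output = self.model(input_ids, attention_mask)
        return output

    wrapper = ExportWrapper(trained_model).eval()

    torch.onnx.export(
      wrapper,
      (encoding["input_ids"], encoding["attention_mask"]),
      'model.onnx',
      export_params=True,
      do_constant_folding=True,
      opset_version=11,
      input_names=['input_ids', 'attention_mask'],
      output_names=['output'],
    )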

    opened by fspanda 0
  • Error: ModelCheckpoint(monitor='val_loss') not found in the returned metrics

    Hi, When running the notebook 11.multi-label-text-classification-with-bert.ipynb I encounter the following error... pytorch_lightning.utilities.exceptions.MisconfigurationException: ModelCheckpoint(monitor='val_loss') not found in the returned metrics: ['train_loss']. HINT: Did you call self.log('val_loss', tensor) in the LightningModule?

    Do you know how I could fix this?
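    A hedged sketch of the usual fix: ModelCheckpoint(monitor='val_loss') can only find metrics that the LightningModule logs, so validation_step needs a self.log('val_loss', ...) call (and trainer.fit needs a validation dataloader). The standalone toy module below just illustrates the pattern; applied to the notebook, the same self.log line would go into the tagger's validation_step:

    import torch
    import torch.nn.functional as F
    import pytorch_lightning as pl
    from pytorch_lightning.callbacks import ModelCheckpoint

    class TinyModule(pl.LightningModule):
      """Toy LightningModule showing where 'val_loss' has to be logged."""

      def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

      def forward(self, x):
        return self.layer(x)

      def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.mse_loss(self(x), y)
        self.log('train_loss', loss)
        return loss

      def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = F.mse_loss(self(x), y)
        self.log('val_loss', loss)  # the metric ModelCheckpoint(monitor='val_loss') looks for
        return loss

      def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    checkpoint_callback = ModelCheckpoint(monitor='val_loss', mode='min')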

    opened by Alegzandra 0
  • LSTM autoencoder vs ANN autoencoder

    Hi, This question is related to notebook 6 on ECG anomaly detection.

    I would like to know whether there is any study of how much accuracy improves when using an LSTM autoencoder versus a plain ANN autoencoder. Let me know if you have any info.

    Regards, Debapriya

    opened by debapriyamaji 0
  • Slightly different kind of labels for the input ( Multi-label Text Classification with BERT and PyTorch Lightning)

    Great work on the Multi-label Text Classification tutorial, thanks! I have a similar problem, except that I only have one column for the labels: where you have [1,0,0,0,0,0] as the label for the toxic class, my data has a single value giving the class number (1 for the first class, up to 10). This causes a problem during training (trainer.fit): the code keeps telling me that the target size (torch.Size([16]), i.e. the batch dimension) is different from the input size (torch.Size([16, 10]), where 10 is the number of classes). Can you please tell me where to make the changes so the code will run?
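    A hedged sketch of one way to reconcile the shapes, assuming the label column holds integer class ids from 1 to 10 and the model keeps producing 10 outputs per example: convert each id to a one-hot target before building the dataset. If the task is really single-label, switching the loss to torch.nn.CrossEntropyLoss on the raw class ids is the more idiomatic alternative.

    import torch

    NUM_CLASSES = 10  # assumption: class ids run from 1 to 10

    def to_one_hot(class_id: int) -> torch.Tensor:
      """Map a 1-based class id to a float one-hot vector of length NUM_CLASSES."""
      target = torch.zeros(NUM_CLASSES)
      target[class_id - 1] = 1.0
      return target

    raw_labels = [1, 4, 10]  # e.g. one mini-batch of the single label column
    targets = torch.stack([to_one_hot(c) for c in raw_labels])
    print(targets.shape)  # torch.Size([3, 10]) -- now matches the model's output shape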

    opened by ma-batita 0
A list of NLP (Natural Language Processing) tutorials built on TensorFlow 2.0.

Won Joon Yoo 335 Jan 4, 2023
Negative sampling for solving the unlabeled entity problem in NER. ICLR-2021 paper: Empirical Analysis of Unlabeled Entity Problem in Named Entity Recognition.

Negative Sampling for NER Unlabeled entity problem is prevalent in many NER scenarios (e.g., weakly supervised NER). Our paper in ICLR-2021 proposes u

Yangming Li 128 Dec 29, 2022
One Stop Anomaly Shop: Anomaly detection using two-phase approach: (a) pre-labeling using statistics, Natural Language Processing and static rules; (b) anomaly scoring using supervised and unsupervised machine learning.

One Stop Anomaly Shop (OSAS) Quick start guide Step 1: Get/build the docker image Option 1: Use precompiled image (might not reflect latest changes):

Adobe, Inc. 148 Dec 26, 2022
Open-World Entity Segmentation

Open-World Entity Segmentation Project Website Lu Qi*, Jason Kuen*, Yi Wang, Jiuxiang Gu, Hengshuang Zhao, Zhe Lin, Philip Torr, Jiaya Jia This projec

DV Lab 408 Dec 29, 2022
Develop open-source Python Arabic NLP libraries that the Arab world will easily use in all Natural Language Processing applications

BADER ALABDAN 2 Oct 22, 2022
A Python wrapper for simple offline real-time dictation (speech-to-text) and speaker-recognition using Vosk.

Simple-Vosk A Python wrapper for simple offline real-time dictation (speech-to-text) and speaker-recognition using Vosk. Check out the official Vosk G

null 2 Jun 19, 2022
Neural-Machine-Translation - Implementation of revolutionary machine translation models

Neural Machine Translation Framework: PyTorch. Repository containing my implementa

Utkarsh Jain 1 Feb 17, 2022
Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CASL project: http://casl-project.ai/

Texar-PyTorch is a toolkit aiming to support a broad set of machine learning, especially natural language processing and text generation tasks. Texar

ASYML 726 Dec 30, 2022
Knowledge Management for Humans using Machine Learning & Tags

HyperTag helps humans intuitively express how they think about their files using tags and machine learning. Represent how you think using tags. Find what you look for using semantic search for your text documents (yes, even PDF's) and images.

Ravn Tech, Inc. 166 Jan 7, 2023
DensePhrases provides answers to your natural language questions from the entire Wikipedia in real-time

DensePhrases provides answers to your natural language questions from the entire Wikipedia in real-time. While it efficiently searches the answers out of 60 billion phrases in Wikipedia, it is also very accurate having competitive accuracy with state-of-the-art open-domain QA models

Jinhyuk Lee 543 Jan 8, 2023
Code release for NeX: Real-time View Synthesis with Neural Basis Expansion

NeX: Real-time View Synthesis with Neural Basis Expansion Project Page | Video | Paper | COLAB | Shiny Dataset We present NeX, a new approach to novel

null 537 Jan 5, 2023
Chinese real time voice cloning (VC) and Chinese text to speech (TTS).

Chinese real time voice cloning (VC) and Chinese text to speech (TTS). An easy-to-use Chinese voice cloning and Chinese speech synthesis system, including a speech encoder, synthesizer, vocoder, and visualization module.

Kuang Dada 6 Nov 8, 2022
Clone a voice in 5 seconds to generate arbitrary speech in real-time

This repository is forked from Real-Time-Voice-Cloning, which only supports English. English | Chinese. Features: Chinese (Mandarin) supported and tested with

Weijia Chen 25.6k Jan 6, 2023
Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation (SIGGRAPH Asia 2021)

Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation This repository contains the implementation of the following paper: Live Speech

OldSix 575 Dec 31, 2022
Bot to connect a real Telegram user, simulating responses with OpenAI's davinci GPT-3 model.

AI-BOT Bot to connect a real Telegram user, simulating responses with OpenAI's davinci GPT-3 model.

Thempra 2 Dec 21, 2022
🚀Clone a voice in 5 seconds to generate arbitrary speech in real-time

English | Chinese. Features: Chinese (Mandarin) supported and tested with multiple datasets: aidatatang_200zh, magicdata, aishell3, data_aishell, etc.

Vega 25.6k Dec 31, 2022
A program that uses real statistics to choose the best times to bet on BloxFlip's crash gamemode

Bloxflip Smart Bet A program that uses real statistics to choose the best times to bet on BloxFlip's crash gamemode. https://bloxflip.com/crash. THIS

null 43 Jan 5, 2023
Open Source Neural Machine Translation in PyTorch

OpenNMT-py: Open-Source Neural Machine Translation OpenNMT-py is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine trans

OpenNMT 5.8k Jan 4, 2023