TensorFlow-LiveLessons - "Deep Learning with TensorFlow" LiveLessons

Overview

Note that the second edition of this video series is now available here. The second edition contains all of the content from this (first) edition plus quite a bit more, as well as updated library versions.

This repository is home to the code that accompanies Jon Krohn's:

  1. Deep Learning with TensorFlow LiveLessons (summary blog post here)
  2. Deep Learning for Natural Language Processing LiveLessons (summary blog post here)
  3. Deep Reinforcement Learning and GANs LiveLessons (summary blog post here)

The above order is the recommended sequence in which to undertake these LiveLessons. That said, Deep Learning with TensorFlow provides a sufficient theoretical and practical background for the other LiveLessons.

Prerequisites

Command Line

Working through these LiveLessons will be easiest if you are familiar with Unix command-line basics. A tutorial covering these fundamentals can be found here.

Python for Data Analysis

In addition, if you're unfamiliar with using Python for data analysis (e.g., the pandas, scikit-learn, matplotlib packages), the data analyst path of DataQuest will quickly get you up to speed -- steps one (Introduction to Python) and two (Intermediate Python and Pandas) provide the bulk of the essentials.

Installation

Step-by-step guides for running the code in this repository can be found in the installation directory.

Notebooks

All of the code that I cover in the LiveLessons can be found in this directory as Jupyter notebooks.

Below is the lesson-by-lesson sequence in which I covered them:

Deep Learning with TensorFlow LiveLessons

Lesson One: Introduction to Deep Learning

1.1 Neural Networks and Deep Learning
  • via analogy to their biological inspirations, this section introduces Artificial Neural Networks and how they developed into the predominantly deep architectures of today
1.2 Running the Code in These LiveLessons
1.3 An Introductory Artificial Neural Network
  • get your hands dirty with a simple-as-possible neural network (shallow_net_in_keras.ipynb) for classifying handwritten digits
  • introduces Jupyter notebooks and their most useful hot keys
  • introduces a gentle quantity of deep learning terminology by whiteboarding through:
    • the MNIST digit data set
    • the preprocessing of images for analysis with a neural network
    • a shallow network architecture
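  • below, a minimal sketch of such a shallow network, assuming the Keras Sequential API used throughout these notebooks (layer sizes are illustrative rather than the notebook's exact values):

    # a single-hidden-layer MNIST classifier (illustrative sketch)
    from keras.datasets import mnist
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.utils import to_categorical

    (X_train, y_train), (X_test, y_test) = mnist.load_data()

    # flatten the 28x28 images to 784-element vectors, scaled to [0, 1]
    X_train = X_train.reshape(60000, 784).astype('float32') / 255
    X_test = X_test.reshape(10000, 784).astype('float32') / 255

    # one-hot encode the ten digit classes
    y_train = to_categorical(y_train, 10)
    y_test = to_categorical(y_test, 10)

    model = Sequential()
    model.add(Dense(64, activation='sigmoid', input_shape=(784,)))  # hidden layer
    model.add(Dense(10, activation='softmax'))                      # output layer

    model.compile(loss='categorical_crossentropy', optimizer='sgd',
                  metrics=['accuracy'])
    model.fit(X_train, y_train, batch_size=128, epochs=20,
              validation_data=(X_test, y_test))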

Lesson Two: How Deep Learning Works

2.1 The Families of Deep Neural Nets and their Applications
  • talk through the function and popular applications of the predominant modern families of deep neural nets:
    • Dense / Fully-Connected
    • Convolutional Networks (ConvNets)
    • Recurrent Neural Networks (RNNs) / Long Short-Term Memory units (LSTMs)
    • Reinforcement Learning
    • Generative Adversarial Networks
2.2 Essential Theory I -- Neural Units
  • the following essential deep learning concepts are introduced with intuitive, graphical explanations:
    • neural units and activation functions
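  • as a toy illustration (hypothetical numbers, not from the lessons), a single unit computes a weighted sum of its inputs plus a bias, then applies a nonlinear activation:

    # one neural unit: z = w . x + b, followed by an activation function
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def relu(z):
        return np.maximum(0.0, z)

    x = np.array([0.5, -1.2, 3.0])   # inputs from the previous layer
    w = np.array([0.8, 0.1, -0.4])   # the unit's learned weights
    b = 0.2                          # the unit's learned bias

    z = np.dot(w, x) + b             # the unit's "net input"
    print(sigmoid(z), relu(z))       # two common activation choices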
2.3 Essential Theory II -- Cost Functions, Gradient Descent, and Backpropagation
2.4 TensorFlow Playground -- Visualizing a Deep Net in Action
2.5 Data Sets for Deep Learning
  • overview of canonical data sets for image classification and meta-resources for data sets ideally suited to deep learning
2.6 Applying Deep Net Theory to Code I
  • apply the theory learned throughout Lesson Two to create an intermediate-depth image classifier (intermediate_net_in_keras.ipynb)
  • builds on, and greatly outperforms, the shallow architecture from Section 1.3

Lesson Three: Convolutional Networks

3.1 Essential Theory III -- Mini-Batches, Unstable Gradients, and Avoiding Overfitting
  • add to our state-of-the-art deep learning toolkit by delving further into essential theory, specifically:
    • weight initialization
      • uniform
      • normal
      • Xavier Glorot
    • stochastic gradient descent
      • learning rate
      • batch size
      • second-order gradient learning
        • momentum
        • Adam
    • unstable gradients
      • vanishing
      • exploding
    • avoiding overfitting / model generalization
      • L1/L2 regularization
      • dropout
      • artificial data set expansion
    • batch normalization
    • more layers
      • max-pooling
      • flatten
3.2 Applying Deep Net Theory to Code II
  • apply the theory learned in the previous section to create a deep, dense net for image classification (deep_net_in_keras.ipynb)
  • builds on, and outperforms, the intermediate architecture from Section 2.6
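  • a sketch of how several Lesson Three ideas (Glorot weight initialization, batch normalization, dropout) look in Keras code; layer sizes here are assumptions, not necessarily those of deep_net_in_keras.ipynb:

    # a deeper dense net using the regularization toolkit from Section 3.1
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, BatchNormalization

    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(784,),
                    kernel_initializer='glorot_uniform'))  # Xavier Glorot init
    model.add(BatchNormalization())
    model.add(Dense(64, activation='relu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))   # randomly silence 20% of units to curb overfitting
    model.add(Dense(10, activation='softmax'))

    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])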
3.3 Introduction to Convolutional Neural Networks for Visual Recognition
  • whiteboard through an intuitive explanation of what convolutional layers are and why they're so effective
3.4 Classic ConvNet Architectures -- LeNet-5
  • apply the theory learned in the previous section to create a deep convolutional net for image classification (lenet_in_keras.ipynb) that is inspired by the classic LeNet-5 neural network introduced in section 1.1
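  • a minimal LeNet-style sketch (filter counts and layer sizes are assumptions, not necessarily the notebook's):

    # a LeNet-5-inspired ConvNet for MNIST
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

    model = Sequential()
    model.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                     input_shape=(28, 28, 1)))    # learn 32 feature maps
    model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))     # downsample the feature maps
    model.add(Dropout(0.25))
    model.add(Flatten())                          # to a vector for dense layers
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='softmax'))

    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])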
3.5 Classic ConvNet Architectures -- AlexNet and VGGNet
3.6 TensorBoard and the Interpretation of Model Outputs
  • return to the networks from the previous section, adding code to output results to the TensorBoard deep learning results-visualization tool
  • explore TensorBoard and explain how to interpret model results within it
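  • in Keras, logging for TensorBoard amounts to attaching a callback at fit time; a minimal sketch (the logs/lenet directory name is an assumption), reusing a compiled model such as the one above:

    # write training results where TensorBoard can find them
    from keras.callbacks import TensorBoard

    tensorboard = TensorBoard(log_dir='logs/lenet')
    model.fit(X_train, y_train, batch_size=128, epochs=10,
              validation_data=(X_test, y_test),
              callbacks=[tensorboard])

    # then, from the shell:  tensorboard --logdir=logs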

Lesson Four: Introduction to TensorFlow

4.1 Comparison of the Leading Deep Learning Libraries
  • discuss the relative strengths, weaknesses, and common applications of the leading deep learning libraries:
    • Caffe
    • Torch
    • Theano
    • TensorFlow
    • and the high-level APIs TFLearn and Keras
  • conclude that, for the broadest set of applications, TensorFlow is the best option
4.2 Introduction to TensorFlow
4.3 Fitting Models in TensorFlow
4.4 Dense Nets in TensorFlow
4.5 Deep Convolutional Nets in TensorFlow
  • create a deep convolutional neural net (lenet_in_tensorflow.ipynb) in TensorFlow with an architecture identical to the LeNet-inspired one built in Keras in Section 3.4
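  • the TensorFlow 1.x workflow differs from Keras: define a graph of placeholders, variables, and ops, then execute it inside a session; a bare-bones softmax-regression sketch (not the notebook's exact code):

    # minimal TensorFlow 1.x model fitting: build the graph, run it in a session
    import numpy as np
    import tensorflow as tf  # assumes TensorFlow 1.x, as used in these LiveLessons

    x = tf.placeholder(tf.float32, [None, 784])   # input images
    y = tf.placeholder(tf.float32, [None, 10])    # one-hot labels

    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y_hat = tf.nn.softmax(tf.matmul(x, W) + b)

    cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_hat), axis=1))
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(cost)

    # a random stand-in mini-batch, just so the sketch runs end to end
    batch_x = np.random.rand(128, 784).astype('float32')
    batch_y = np.eye(10)[np.random.randint(0, 10, 128)].astype('float32')

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # fetching `optimizer` is what triggers one step of gradient descent
        _, batch_cost = sess.run([optimizer, cost],
                                 feed_dict={x: batch_x, y: batch_y})
        print(batch_cost)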

Lesson Five: Improving Deep Networks

5.1 Improving Performance and Tuning Hyperparameters
  • detail systematic steps for improving the performance of deep neural nets, including by tuning hyperparameters
5.2 How to Build Your Own Deep Learning Project
  • specific steps for designing and evaluating your own deep learning project
5.3 Resources for Self-Study
  • topics worth investing time in to become an expert deployer of deep learning models


Deep Learning for Natural Language Processing

Lesson One: The Power and Elegance of Deep Learning for NLP

1.1 Introduction to Deep Learning for Natural Language Processing
  • high-level overview of deep learning as it pertains to Natural Language Processing (NLP)
  • influential examples of industrial applications of NLP
  • timeline of contemporary breakthroughs that have brought Deep Learning approaches to the forefront of NLP research and development
1.2 Computational Representations of Natural Language Elements
  • introduce the elements of natural language
  • contrast how these elements are represented by traditional machine-learning models and emergent deep-learning models
1.3 NLP Applications
  • specify common NLP applications and bucket them into three tiers of relative complexity
1.4 Installation, Including GPU Considerations
1.5 Review of Prerequisite Deep Learning Theory
1.6 A Sneak Peek
  • take a tantalising look ahead at the capabilities developed over the course of these LiveLessons

Lesson Two: Word Vectors

2.1 Vector-Space Embedding
  • leverage interactive demos to enable an intuitive understanding of vector-space embeddings of words, nuanced quantitative representations of word meaning
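  • a quick demo of the same intuition in code, assuming gensim and one of its downloadable pre-trained models (any word2vec/GloVe KeyedVectors behaves the same way):

    # exploring a pre-trained word-vector space with gensim
    import gensim.downloader as api

    word_vectors = api.load('glove-wiki-gigaword-50')  # downloads on first use

    # nearby words occupy nearby points in the embedding space
    print(word_vectors.most_similar('frog', topn=5))

    # the classic analogy: king - man + woman is closest to queen
    print(word_vectors.most_similar(positive=['king', 'woman'],
                                    negative=['man'], topn=1))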
2.2 word2vec
  • key papers that led to the development of word2vec, a technique for transforming natural language into vector representations
  • essential word2vec theory introduced:
    • architectures:
      1. Skip-Gram
      2. Continuous Bag of Words
    • training algorithms:
      1. hierarchical softmax
      2. negative sampling
    • evaluation perspectives:
      1. intrinsic
      2. extrinsic
    • hyperparameters:
      1. number of dimensions
      2. context-word window size
      3. number of iterations
      4. size of data set
  • contrast word2vec with its leading alternative, GloVe
2.3 Data Sets for NLP
2.4 Creating Word Vectors with word2vec
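  • a gensim training sketch that exposes the hyperparameters listed in Section 2.2 (toy corpus; gensim 3.x-era argument names `size` and `iter` assumed):

    # training word2vec on a tokenized corpus
    from gensim.models import Word2Vec

    sentences = [['the', 'quick', 'brown', 'fox'],
                 ['jumps', 'over', 'the', 'lazy', 'dog']]

    model = Word2Vec(sentences,
                     size=64,      # number of embedding dimensions
                     window=5,     # context-word window size
                     sg=1,         # 1 = Skip-Gram; 0 = Continuous Bag of Words
                     negative=5,   # negative sampling (hs=1 would use hierarchical softmax)
                     iter=5,       # number of iterations over the corpus
                     min_count=1)  # keep every word in this tiny corpus

    print(model.wv['fox'][:5])     # first few dimensions of one word vector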

Lesson Three: Modeling Natural Language Data

3.1 Best Practices for Preprocessing Natural Language Data
  • in natural_language_preprocessing_best_practices.ipynb, apply the following recommended best practices to clean up a corpus of natural language data prior to modeling (a code sketch follows this list):
    • tokenize
    • convert all characters to lowercase
    • remove stopwords
    • remove punctuation
    • stem words
    • handle bigram (and trigram) word collocations
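  • a simplified stand-in for the notebook's pipeline, using nltk and gensim (library choices assumed; the notebook's exact steps may differ):

    # the preprocessing steps above, sketched with nltk and gensim
    import string
    import nltk
    from nltk.corpus import stopwords
    from nltk.stem.porter import PorterStemmer
    from gensim.models.phrases import Phrases, Phraser

    nltk.download('punkt')
    nltk.download('stopwords')

    stop_words = set(stopwords.words('english'))
    stemmer = PorterStemmer()

    def preprocess(sentence):
        tokens = nltk.word_tokenize(sentence.lower())        # tokenize + lowercase
        tokens = [t for t in tokens if t not in stop_words   # remove stopwords
                  and t not in string.punctuation]           # remove punctuation
        return [stemmer.stem(t) for t in tokens]             # stem words

    corpus = [preprocess(s) for s in ['New York is a big city.',
                                      'She lives in New York.']]

    # detect collocations so that e.g. "new york" becomes the bigram "new_york"
    bigram = Phraser(Phrases(corpus, min_count=1, threshold=1))
    print([bigram[doc] for doc in corpus])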
3.2 The Area Under the ROC Curve
  • detail the calculation and functionality of the area under the Receiver Operating Characteristic curve summary metric, which is used throughout the remainder of the LiveLessons for evaluating model performance
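  • the metric in one line of scikit-learn (toy labels and scores for illustration):

    # area under the ROC curve: 1.0 is perfect, 0.5 is chance
    from sklearn.metrics import roc_auc_score

    y_true = [0, 0, 1, 1]            # ground-truth classes
    y_score = [0.1, 0.4, 0.35, 0.8]  # the model's predicted probabilities

    print(roc_auc_score(y_true, y_score))  # 0.75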
3.3 Dense Neural Network Classification
3.4 Convolutional Neural Network Classification

Lesson Four: Recurrent Neural Networks

4.1 Essential Theory of RNNs
  • provide an intuitive understanding of Recurrent Neural Networks (RNNs), which permit backpropagation through time over sequential data, such as natural language and financial time series data
4.2 RNNs in Practice
  • incorporate simple RNN layers into a model that classifies documents by their sentiment (rnn_in_keras.ipynb)
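  • a sketch of such a model (vocabulary size, sequence length, and layer sizes are assumptions, not necessarily those in rnn_in_keras.ipynb):

    # a simple RNN sentiment classifier
    from keras.models import Sequential
    from keras.layers import Embedding, SimpleRNN, Dense

    model = Sequential()
    model.add(Embedding(10000, 64, input_length=100))  # word indices -> 64-d vectors
    model.add(SimpleRNN(256))                          # recurrence over the sequence
    model.add(Dense(1, activation='sigmoid'))          # positive vs. negative

    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])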
4.3 Essential Theory of LSTMs and GRUs
  • develop familiarity with the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) varieties of RNNs which provide markedly more productive modeling of sequential data with deep learning models
4.4 LSTMs and GRUs in Practice

Lesson Five: Advanced Models

5.1 Bi-Directional LSTMs
  • Bi-directional LSTMs are an especially potent variant of the LSTM
  • high-level theory on Bi-LSTMs before leveraging them in practice (bidirectional_lstm.ipynb)
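  • in Keras, the change from the previous RNN sketch is one wrapper layer; a sketch in the spirit of bidirectional_lstm.ipynb (sizes assumed):

    # read each sequence forwards and backwards, concatenating both passes
    from keras.models import Sequential
    from keras.layers import Embedding, LSTM, Bidirectional, Dense

    model = Sequential()
    model.add(Embedding(10000, 64, input_length=100))
    model.add(Bidirectional(LSTM(256)))
    model.add(Dense(1, activation='sigmoid'))

    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])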
5.2 Stacked LSTMs
5.3 Parallel Network Architectures
  • advanced data modeling capabilities are possible with non-sequential architectures, e.g., parallel convolutional layers, each with unique hyperparameters (multi_convnet_architectures.ipynb)
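  • such non-sequential models require the Keras functional API rather than Sequential; a sketch with three parallel convolutional branches (branch count and sizes assumed):

    # parallel Conv1D branches, each with its own kernel size, merged before output
    from keras.models import Model
    from keras.layers import (Input, Embedding, Conv1D,
                              GlobalMaxPooling1D, concatenate, Dense)

    inputs = Input(shape=(100,))
    embedded = Embedding(10000, 64)(inputs)

    branches = []
    for kernel_size in (2, 3, 4):               # one branch per n-gram length
        conv = Conv1D(256, kernel_size, activation='relu')(embedded)
        branches.append(GlobalMaxPooling1D()(conv))

    merged = concatenate(branches)              # three parallel paths meet here
    outputs = Dense(1, activation='sigmoid')(merged)

    model = Model(inputs, outputs)
    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])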


Deep Reinforcement Learning and GANs

Lesson One: The Foundations of Artificial Intelligence

1.1 The Contemporary State of AI
  • examine what the term "Artificial Intelligence" means and how it relates to deep learning
  • define narrow, general, and super intelligence
1.2 Applications of Generative Adversarial Networks
  • uncover the rapidly-improving quality of Generative Adversarial Networks for creating compelling novel imagery in the style of humans
  • involves the fun, interactive pix2pix tool
1.3 Applications of Deep Reinforcement Learning
  • distinguish supervised and unsupervised learning from reinforcement learning
  • provide an overview of the seminal contemporary deep reinforcement learning breakthroughs, including:
    • the Deep Q-Learning algorithm
    • AlphaGo
    • AlphaGo Zero
    • AlphaZero
    • robotics advances
  • introduce the most popular deep reinforcement learning environments
1.4 Running the Code in these LiveLessons
1.5 Review of Prerequisite Deep Learning Theory

Lesson Two: Generative Adversarial Networks (GANs)

2.1 Essential GAN Theory
  • cover the high-level theory of what GANs are and how they are able to generate realistic-looking images
2.2 The “Quick, Draw!” Game Dataset
  • show the Quick, Draw! game, which we use as the source of hundreds of thousands of hand-drawn images from a single class for a GAN to learn to imitate
2.3 A Discriminator Network
2.4 A Generator Network
2.5 Training an Adversarial Network
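  • the three pieces above fit together as sketched below (architectures drastically simplified relative to the notebooks; the 28x28 shapes match the Quick, Draw! bitmaps):

    # skeleton of a GAN: discriminator, generator, and the adversarial model
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense, Reshape, Flatten, LeakyReLU
    from keras.optimizers import Adam

    z_dim = 100  # dimensionality of the generator's noise input

    # discriminator: image in, probability that the image is real out
    discriminator = Sequential([
        Flatten(input_shape=(28, 28, 1)),
        Dense(128), LeakyReLU(0.2),
        Dense(1, activation='sigmoid'),
    ])
    discriminator.compile(loss='binary_crossentropy', optimizer=Adam())

    # generator: noise vector in, fake image out
    generator = Sequential([
        Dense(128, input_dim=z_dim), LeakyReLU(0.2),
        Dense(28 * 28, activation='tanh'),
        Reshape((28, 28, 1)),
    ])

    # adversarial model: freeze the discriminator, train the generator to fool it
    discriminator.trainable = False
    gan = Sequential([generator, discriminator])
    gan.compile(loss='binary_crossentropy', optimizer=Adam())

    # one simplified training step
    real = np.random.rand(32, 28, 28, 1)             # stand-in for real drawings
    noise = np.random.normal(0, 1, (32, z_dim))
    fake = generator.predict(noise)

    discriminator.train_on_batch(real, np.ones(32))  # real images -> label 1
    discriminator.train_on_batch(fake, np.zeros(32)) # fakes -> label 0
    gan.train_on_batch(noise, np.ones(32))           # generator aims for "real"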

Lesson Three: Deep Q-Learning Networks (DQNs)

3.1 The Cartpole Game
  • introduce the Cartpole Game, an environment provided by OpenAI and used throughout the remainder of these LiveLessons to train deep reinforcement learning algorithms
3.2 Essential Deep RL Theory
  • delve into the essential theory of deep reinforcement learning in general
3.3 Essential DQN Theory
  • delve into the essential theory of Deep Q-Learning networks, a popular type of deep reinforcement learning algorithm
3.4 Defining a DQN Agent
3.5 Interacting with an OpenAI Gym Environment
  • leverage OpenAI Gym to enable our Deep Q-Learning agent to master the Cartpole Game (cartpole_dqn.ipynb)
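  • the shape of that interaction loop, using the classic OpenAI Gym API of this era (random actions stand in for the trained agent's epsilon-greedy choices):

    # episode loop for CartPole: observe, act, collect reward, repeat
    import gym

    env = gym.make('CartPole-v0')

    for episode in range(10):
        state = env.reset()                     # start a fresh episode
        done, total_reward = False, 0
        while not done:
            action = env.action_space.sample()  # the DQN agent would decide here
            state, reward, done, info = env.step(action)
            total_reward += reward              # +1 per step the pole stays up
        print('episode {}: reward {}'.format(episode, total_reward))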

Lesson Four: OpenAI Lab

4.1 Visualizing Agent Performance
  • use the OpenAI Lab to visualise our Deep Q-Learning agent's performance in real-time
4.2 Modifying Agent Hyperparameters
  • learn to straightforwardly optimise a deep reinforcement learning agent's hyperparameters
4.3 Automated Hyperparameter Experimentation and Optimization
  • automate the search through hyperparameters to optimize our agent’s performance
4.4 Fitness
  • calculate summary metrics to gauge our agent’s overall fitness

Lesson Five: Advanced Deep Reinforcement Learning Agents

5.1 Policy Gradients and the REINFORCE Algorithm
  • at a high level, discover Policy Gradient algorithms in general and the classic REINFORCE implementation in particular
5.2 The Actor-Critic Algorithm
  • cover how Policy Gradients can be combined with Deep Q-Learning to facilitate Actor-Critic algorithms
5.3 Software 2.0
  • discuss how deep learning is ushering in a new era of software development driven by data in place of hard-coded rules
5.4 Approaching Artificial General Intelligence
  • return to our discussion of Artificial Intelligence, specifically addressing the limitations of modern deep learning approaches


Comments
  • cannot build the docker file in windows

Hi, I want to build the Dockerfile on Windows, but I get this error:

    The environment is inconsistent, please check the package plan carefully
    The following packages are causing the inconsistency:
    
      - conda-forge/noarch::seaborn==0.9.0=py_0
      - conda-forge/linux-64::matplotlib==3.0.3=py37_1
    failed
    
    SpecsConfigurationConflictError: Requested specs conflict with configured specs.
      requested specs:
        - tensorflow=1.0
      pinned specs:
        - python=3.7
    Use 'conda config --show-sources' to look for 'pinned_specs' and 'track_features'
    configuration parameters.  Pinned specs may also be defined in the file
    /opt/conda/conda-meta/pinned.
    
    
    The command '/bin/sh -c conda install --quiet --yes 'tensorflow=1.0*'' returned a non-zero code: 1
    

    can you help me with this?

    opened by mohamadk 13
  • Could not see any notebooks after mounting the local path

    sudo docker run -v ~/TensorFlow-LiveLessons:/home/jovyan/work -it --rm -p 8888:8888 tensorflow-ll-stack
    Execute the command: jupyter notebook
    [I 01:05:51.698 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
    [W 01:05:51.718 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
    [I 01:05:51.751 NotebookApp] JupyterLab alpha preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
    [I 01:05:51.755 NotebookApp] Serving notebooks from local directory: /home/jovyan
    [I 01:05:51.755 NotebookApp] 0 active kernels
    [I 01:05:51.755 NotebookApp] The Jupyter Notebook is running at:
    [I 01:05:51.755 NotebookApp] http://[all ip addresses on your system]:8888/?token=8584b69e338ae3695f1f2a105b46f57cfc2cefa9f4138783
    [I 01:05:51.756 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
    [C 01:05:51.756 NotebookApp]

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://localhost:8888/?token=8584b69e338ae3695f1f2a105b46f57cfc2cefa9f4138783
    

    [W 01:06:04.186 NotebookApp] Forbidden
    [W 01:06:04.186 NotebookApp] 403 GET /api/sessions?=1510016650393 (x.x.x.x) 2.21ms referer=http://kahlobuild.cloudapp.net:8888/tree
    [W 01:06:04.188 NotebookApp] Forbidden
    [W 01:06:04.188 NotebookApp] 403 GET /api/terminals?=1510016650394 (x.x.x.x) 1.20ms referer=http://kahlobuild.cloudapp.net:8888/tree

    It is empty and not listing anything from Jupyter; the notebook list is empty.

    It is not serving notebooks from /home/jovyan/work, instead it serves from /home/jovyan

    opened by gbpnkans 10
  • this container no longer works

    For whatever reason, Jupyter is not installed in it:

    Executing the command: jupyter notebook
    Traceback (most recent call last):
      File "/opt/conda/bin/jupyter-notebook", line 4, in <module>
        import notebook.notebookapp
    ImportError: No module named notebook.notebookapp

    opened by noiseoverip 9
  • Windows installation

    I'd like to share my experience with installing TensorFlow on Windows, which was actually a piece of cake since the 1.0 release. I thought about making a pull-request, but maybe you have your reasons for not elaborating.

    Probably the most important part for any Windows user is to download a Python distribution that already has built-in support for the scientific Python stack. I used Anaconda, but there are also WinPython and Enthought Canopy (for Python 3). From there, it's just a matter of following the instructions on TensorFlow's page.

    In short, given Anaconda is installed and in the user's path (happens by default):

    C:\> conda create -n tf python=3.5
    ...
    C:\> activate tf
    (tf) C:\> pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.2.1-cp35-cp35m-win_amd64.whl
    ...
    (tf) C:\> conda install jupyter matplotlib (etc.)
    

    To use TF with a GPU, the headache is more from the CUDA side, but not too much:

    1. Install Visual Studio Community 2015, making sure C++ is part of the installation.
    2. Install CUDA 8
    3. Download cuDNN 5.1 (which isn't the latest version) and drop the files from the zip into where you installed CUDA.
    4. Same instructions as above, but use https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.2.1-cp35-cp35m-win_amd64.whl instead.

    Hope this helps :smiley:

    opened by beneyal 6
  • Issue while following the step by step windows docker installation

    Hi

    I tried to follow the steps specified in the link --> TensorFlow-LiveLessons/installation/step_by_step_Windows_Docker_install.md.

    However, I got the following error while executing step #7: When that build process has finished, run the Docker container by executing docker run -v c:/full/path/to/the/clone:/home/jovyan/work -it --rm -p 8888:8888 tensorflow-ll-stack.

    The error message is :

    command I executed: PS D:\Tensor_NLP> docker run -it --rm -p 8888:8888 tensorflow-ll-stack

    Error Message I got :

    /usr/local/bin/start-notebook.sh: ignoring /usr/local/bin/start-notebook.d/*

    Container must be run with group "root" to update passwd file
    Executing the command: jupyter notebook
    Traceback (most recent call last):
      File "/opt/conda/bin/jupyter-notebook", line 4, in <module>
        import notebook.notebookapp
    ImportError: No module named notebook.notebookapp

    Please let me know where I am going wrong.

    Thanks Deepa

    opened by drajesh-tech 4
  • How to avoid inf value while feature extracting in Matlab?

    I'm trying to extract features from some EEG signals. One of the trials in my for loop creates an inf value, and emd doesn't accept it, so it causes an error! Would you please tell me how I can fix it? Thanks a lot in advance. My code:

    fs = 250;

    idSubject = 1:9; idTraining = 1:3;

    for ii = idSubject
        disp(ii)
        for jj = idTraining
            disp(jj)
            [signal,H] = sload(['B0' num2str(ii) '0' num2str(jj) 'T.gdf']);

            s = 1;
            for ch = 1:3
                Signal(:,s) = signal(:,ch);
                s = s + 1;
            end

            Signal = Signal';
            SignalMean = nanmean(Signal);
            SignalMean = SignalMean';

            LabelLeft = find(H.Classlabel == 1);  %Signal of left hand
            LabelLeft = LabelLeft';
            LabelRight = find(H.Classlabel == 2); %Signal of right hand
            LabelRight = LabelRight';

            s = 1;
            for i = LabelLeft
                SignalL = SignalMean(H.TRIG(i)+fs:H.TRIG(i)+(3*fs)-1,1);
                SignalLeft{s,:} = emd(SignalL,'MAXMODES',4);
                s = s+1;

                clear SignalL
            end
            SignalLeft = cell2mat(SignalLeft);
            sLeftall{jj,:} = SignalLeft;

            s = 1;
            for i = LabelRight
                SignalR = SignalMean(H.TRIG(i)+fs:H.TRIG(i)+(3*fs)-1,1);
                SignalRight{s,:} = emd(SignalR,'MAXMODES',4);
                s = s+1;

                clear SignalR
            end
            SignalRight = cell2mat(SignalRight);
            sRightall{jj,:} = SignalRight;

            clear SignalLeft SignalRight signal Signal SignalMean H LabelLeft LabelRight
        end
        sLeftall = cell2mat(sLeftall);
        sRightall = cell2mat(sRightall);

        signalLeftAll{ii,:} = sLeftall;
        signalRightAll{ii,:} = sLeftall;

        clear sLeftall sRightall
    end

    opened by jemappelleMarie 4
  • AttributeError: module 'tornado.web' has no attribute 'asynchronous'

    I receive the error below from the server whenever I try to navigate to any notebooks through the Jupyter UI. A 500: Internal Server Error message also appears in a new browser tab.

    I originally thought it might be an issue with the TensorFlow 1.0 installation, so I tried again with just 'tensorflow', but the error persists.

    [E 18:51:28.099 NotebookApp] Uncaught exception GET /notebooks/work/notebooks/cartpole_dqn.ipynb (172.17.0.1)
    HTTPServerRequest(protocol='http', host='127.0.0.1:8888', method='GET', uri='/notebooks/work/notebooks/cartpole_dqn.ipynb', version='HTTP/1.1', remote_ip='172.17.0.1')
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.6/site-packages/tornado/web.py", line 1697, in _execute
        result = method(*self.path_args, **self.path_kwargs)
      File "/opt/conda/lib/python3.6/site-packages/tornado/web.py", line 3174, in wrapper
        return method(self, *args, **kwargs)
      File "/opt/conda/lib/python3.6/site-packages/notebook/notebook/handlers.py", line 59, in get
        get_custom_frontend_exporters=get_custom_frontend_exporters
      File "/opt/conda/lib/python3.6/site-packages/notebook/base/handlers.py", line 467, in render_template
        return template.render(**ns)
      File "/opt/conda/lib/python3.6/site-packages/jinja2/asyncsupport.py", line 76, in render
        return original_render(self, *args, **kwargs)
      File "/opt/conda/lib/python3.6/site-packages/jinja2/environment.py", line 1008, in render
        return self.environment.handle_exception(exc_info, True)
      File "/opt/conda/lib/python3.6/site-packages/jinja2/environment.py", line 780, in handle_exception
        reraise(exc_type, exc_value, tb)
      File "/opt/conda/lib/python3.6/site-packages/jinja2/_compat.py", line 37, in reraise
        raise value.with_traceback(tb)
      File "/opt/conda/lib/python3.6/site-packages/notebook/templates/notebook.html", line 1, in top-level template code
        {% extends "page.html" %}
      File "/opt/conda/lib/python3.6/site-packages/notebook/templates/page.html", line 154, in top-level template code
        {% block header %}
      File "/opt/conda/lib/python3.6/site-packages/notebook/templates/notebook.html", line 120, in block "header"
        {% for exporter in get_custom_frontend_exporters() %}
      File "/opt/conda/lib/python3.6/site-packages/notebook/notebook/handlers.py", line 19, in get_custom_frontend_exporters
        from nbconvert.exporters.base import get_export_names, get_exporter
      File "/opt/conda/lib/python3.6/site-packages/nbconvert/__init__.py", line 7, in <module>
        from . import postprocessors
      File "/opt/conda/lib/python3.6/site-packages/nbconvert/postprocessors/__init__.py", line 5, in <module>
        from .serve import ServePostProcessor
      File "/opt/conda/lib/python3.6/site-packages/nbconvert/postprocessors/serve.py", line 19, in <module>
        class ProxyHandler(web.RequestHandler):
      File "/opt/conda/lib/python3.6/site-packages/nbconvert/postprocessors/serve.py", line 21, in ProxyHandler
        @web.asynchronous
    AttributeError: module 'tornado.web' has no attribute 'asynchronous'
    [E 18:51:28.115 NotebookApp] {
      "Host": "127.0.0.1:8888",
      "Connection": "keep-alive",
      "Upgrade-Insecure-Requests": "1",
      "Dnt": "1",
      "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36",
      "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
      "Referer": "http://127.0.0.1:8888/tree/work/notebooks",
      "Accept-Encoding": "gzip, deflate, br",
      "Accept-Language": "en-US,en;q=0.9,en-GB;q=0.8",
      "Cookie": "_xsrf=2|8451cf07|677ca621c29f86acbc87464ba9e92f25|1551993854; username-127-0-0-1-8888="2|1:0|10:1552071082|23:username-127-0-0-1-8888|44:NzAzZmJjMGNmNWE0NDBkNDhiZDA1NTAzMmU4YTY5YmI=|64ce7954a09e974e5014ed803858785b88b1a04e0d99c8f796a0f7bdc8e32410""
    }
    [E 18:51:28.115 NotebookApp] 500 GET /notebooks/work/notebooks/cartpole_dqn.ipynb (172.17.0.1) 176.96ms referer=http://127.0.0.1:8888/tree/work/notebooks

    opened by ghost 4
  • intermediate_net_in_tensorflow

    Maybe a trivial question, but I don't understand why we "fetch" the optimizer.

    # feed batch data to run optimization and fetching cost and accuracy: 
     _, batch_cost, batch_acc = session.run([optimizer, cost, accuracy_pct], 
                                             feed_dict={x: batch_x, y: batch_y})
    

    Happy to hear your thoughts about this. Thanks !

    opened by jhagege 3
  • getting error while installing tensorflow

    On running the following command (on Mac): sudo docker build -t tensorflow-ll-stack .

    getting the following exception:

    Step 5/9 : RUN pip install tflearn==0.3.2
     ---> Running in 3d5758f37abe
    Collecting tflearn==0.3.2
      Downloading tflearn-0.3.2.tar.gz (98kB)
    Requirement already satisfied: numpy in /opt/conda/lib/python3.6/site-packages (from tflearn==0.3.2)
    Requirement already satisfied: six in /opt/conda/lib/python3.6/site-packages (from tflearn==0.3.2)
    Requirement already satisfied: Pillow in /opt/conda/lib/python3.6/site-packages (from tflearn==0.3.2)
    Exception:
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2533, in _dep_map
        return self.__dep_map
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2608, in __getattr__
        raise AttributeError(attr)
    AttributeError: _Distribution__dep_map

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/packaging/requirements.py", line 92, in __init__
        req = REQUIREMENT.parseString(requirement_string)
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 1617, in parseString
        raise exc
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 1607, in parseString
        loc, tokens = self._parse( instring, 0 )
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 1379, in _parseNoCache
        loc,tokens = self.parseImpl( instring, preloc, doActions )
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 3376, in parseImpl
        loc, exprtokens = e._parse( instring, loc, doActions )
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 1379, in _parseNoCache
        loc,tokens = self.parseImpl( instring, preloc, doActions )
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 3698, in parseImpl
        return self.expr._parse( instring, loc, doActions, callPreParse=False )
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 1379, in _parseNoCache
        loc,tokens = self.parseImpl( instring, preloc, doActions )
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 3359, in parseImpl
        loc, resultlist = self.exprs[0]._parse( instring, loc, doActions, callPreParse=False )
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 1383, in _parseNoCache
        loc,tokens = self.parseImpl( instring, preloc, doActions )
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 2670, in parseImpl
        raise ParseException(instring, loc, self.errmsg, self)
    pip._vendor.pyparsing.ParseException: Expected W:(abcd...) (at char 0), (line:1, col:1)

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2874, in __init__
        super(Requirement, self).__init__(requirement_string)
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/packaging/requirements.py", line 96, in __init__
        requirement_string[e.loc:e.loc + 8]))
    pip._vendor.packaging.requirements.InvalidRequirement: Invalid requirement, parse error at "'\x00\x00\x00\x00\x00\x00\x00\x00'"

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/opt/conda/lib/python3.6/site-packages/pip/basecommand.py", line 215, in main
        status = self.run(options, args)
      File "/opt/conda/lib/python3.6/site-packages/pip/commands/install.py", line 335, in run
        wb.build(autobuilding=True)
      File "/opt/conda/lib/python3.6/site-packages/pip/wheel.py", line 749, in build
        self.requirement_set.prepare_files(self.finder)
      File "/opt/conda/lib/python3.6/site-packages/pip/req/req_set.py", line 380, in prepare_files
        ignore_dependencies=self.ignore_dependencies))
      File "/opt/conda/lib/python3.6/site-packages/pip/req/req_set.py", line 699, in _prepare_file
        set(req_to_install.extras) - set(dist.extras)
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2757, in extras
        return [dep for dep in self._dep_map if dep]
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2547, in _dep_map
        dm.setdefault(extra, []).extend(parse_requirements(reqs))
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2867, in parse_requirements
        yield Requirement(line)
      File "/opt/conda/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2876, in __init__
        raise RequirementParseError(str(e))
    pip._vendor.pkg_resources.RequirementParseError: Invalid requirement, parse error at "'\x00\x00\x00\x00\x00\x00\x00\x00'"
    The command '/bin/sh -c pip install tflearn==0.3.2' returned a non-zero code: 2

    opened by vikramkon 3
  • Notebook kernel continuously reconnecting

    Hi, I followed the GPU installation instructions on an Ubuntu 16.04 machine with an Nvidia GPU installed properly (I have used it for other things and it seems to work).

    After running the container using

    sudo nvidia-docker run -v ~/Learning/TensorFlow-LiveLessons:/home/jovyan/work -it --rm -p 8888:8888 tfll-gpu-stack

    The kernel connection keeps flashing back and forth between "connected" and "reconnecting", and in the terminal I'm getting repeated messages of Adapting to protocol v5.1 for kernel 8039... with no apparent error message.

    Any ideas how to debug this? It prevents me from running any notebook and properly following your course. Thanks, Tomer

    opened by tomersagi 3
  • DQN learning is very slow in comparison to training video

    Hi, just wondering what the issue might be: the notebook https://github.com/the-deep-learners/TensorFlow-LiveLessons/blob/master/notebooks/cartpole_dqn.ipynb learns so fast in the training video, whereas when I run the same notebook on my local machine or in a Colab environment with GPU support it is dramatically slower (1 episode per 5 s). In the training video it reaches hundreds of episodes within seconds.

    Thanks in advance for any hints.

    opened by jahu 2