Keras documentation, hosted live at keras.io

Overview

Keras.io documentation generator

This repository hosts the code used to generate the keras.io website.

Generating a local copy of the website

pip install -r requirements.txt
cd scripts
python autogen.py make
python autogen.py serve

If you have Docker (you don't need the GPU version), you can instead run:

docker build -t keras-io . && docker run --rm -p 8000:8000 keras-io

The first run will take a while, since it pulls the base image and installs the dependencies, but subsequent runs will be much faster.

Another way to test using Docker is via our Makefile:

make container-test

This command will build a Docker image with a documentation server and run it.

Call for examples

Are you interested in submitting new examples for publication on keras.io? We welcome your contributions! Please read the information below about adding new code examples.

We are currently interested in the following examples.

Adding a new code example

Keras code examples are implemented as tutobooks.

A tutobook is a script available simultaneously as a notebook, as a Python file, and as a nicely-rendered webpage.

Its source of truth (for manual editing and version control) is its Python script form, but you can also create one by starting from a notebook and converting it with the nb2py command.

Text cells are stored in markdown-formatted comment blocks. The first line (starting with """) may optionally contain a special annotation, one of:

  • shell: execute this block while prefixing each line with !.
  • invisible: do not render this block.
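For illustration, here is a hypothetical text cell followed by a shell cell, as they would appear in the script form (the markdown content and the package name are placeholders, not taken from a real example):

"""
## Setup

First, let's import the libraries we need.
"""

import numpy as np

"""shell
pip install -q some-package
"""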

The script form should start with a header with the following fields:

Title: (title)
Author: (could be `Authors`: as well, and may contain markdown links)
Date created: (date in yyyy/mm/dd format)
Last modified: (date in yyyy/mm/dd format)
Description: (one-line text description)
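For illustration, a filled-in header might look like the following (the title, author, dates, and description are hypothetical placeholders):

"""
Title: Classifying widgets with a small convnet
Author: [Jane Doe](https://example.com/janedoe)
Date created: 2023/01/01
Last modified: 2023/01/15
Description: Training a simple convnet on a small widget dataset.
"""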

To see examples of tutobooks, you can check out any .py file in examples/ or guides/.

Creating a new example starting from an ipynb file

  1. Save the ipynb file to local disk.
  2. Convert the file to a tutobook by running the following (assuming you are in the scripts/ directory):
python tutobooks.py nb2py path_to_your_nb.ipynb ../examples/vision/script_name.py

This will create the file examples/vision/script_name.py.

  3. Open it, fill in the headers, and generally edit it so that it looks nice.

NOTE THAT THE CONVERSION SCRIPT MAY MAKE MISTAKES IN ITS ATTEMPTS TO SHORTEN LINES. MAKE SURE TO PROOFREAD THE GENERATED .py IN FULL. Alternatively, keep your lines reasonably sized (under 90 characters) to start with, so that the script won't have to shorten them.

  4. Run python autogen.py add_example vision/script_name. This will generate an ipynb and markdown rendering of your example, creating files in examples/vision/ipynb, examples/vision/md, and examples/vision/img. Do not modify any of these files by hand; only the original Python script should ever be edited manually.
  5. Submit a PR adding examples/vision/script_name.py (only the .py, not the generated files). Get a review and approval.
  6. Once the PR is approved, add to the PR the files created by the add_example command. Then we will merge the PR.

Creating a new example starting from a Python script

  1. Format the script with black: black script_name.py
  2. Add the tutobook header.
  3. Put the script in the relevant subfolder of examples/ (e.g. examples/vision/script_name.py).
  4. Run python autogen.py add_example vision/script_name. This will generate an ipynb and markdown rendering of your example, creating files in examples/vision/ipynb, examples/vision/md, and examples/vision/img. Do not modify any of these files by hand; only the original Python script should ever be edited manually.
  5. Submit a PR adding examples/vision/script_name.py (only the .py, not the generated files). Get a review and approval.
  6. Once the PR is approved, add to the PR the files created by the add_example command. Then we will merge the PR.

Previewing a new example

You can locally preview what the example looks like by running:

cd scripts
python autogen.py add_example vision/script_name

(Assuming the tutobook file is examples/vision/script_name.py.)

NOTE THAT THIS COMMAND WILL ERROR OUT IF ANY CELL TAKES TOO LONG TO EXECUTE. In that case, make your code lighter/faster. Remember that examples are meant to demonstrate workflows, not train state-of-the-art models. They should stay very lightweight.

Then serve the website:

python autogen.py make
python autogen.py serve

And navigate to 0.0.0.0:8000/examples.

Read-only autogenerated files

The contents of the following folders should not be modified by hand:

  • site/*
  • sources/*
  • templates/examples/*
  • templates/guides/*
  • examples/*/md/*, examples/*/ipynb/*, examples/*/img/*
  • guides/md/*, guides/ipynb/*, guides/img/*

Modifiable files

These are the only files that should be edited by hand:

  • templates/*.md, with the exception of templates/examples/* and templates/guides/*
  • examples/*/*.py
  • guides/*.py
  • theme/*
  • scripts/*.py
Comments
  • Added example of Barlow Twins

    This is a Keras implementation of Barlow Twins (contrastive SSL with redundancy-reduction techniques) using the CIFAR-10 dataset.

    • Demonstrates implementation of Barlow Twins
    • Contains a concise and clear explanation about the topic for people who don't know much about self-supervised learning

    This method can reach about 63-64% validation accuracy with linear evaluation.

    COLAB LINK

    cla: yes 
    opened by dewball345 33
  • Example on using RandAugment for training image classification models

    RandAugment has been quite an important recipe in works like FixMatch, Noisy Student Training, and EfficientNets. Yet there's hardly any example that shows how to apply it specifically in the context of Keras models. This is the primary motivation behind this tutorial.

    Additionally, it might be of interest to folks working on improving the robustness of vision models. I have included a simple check that evaluates two different models on the CIFAR-10-C dataset (severity: saturated_5) where one model is trained with RandAugment and the other one is trained with simple augmentation transforms. As expected, the former yields better performance.

    opened by sayakpaul 26
  • No attribute 'image_dataset_from_directory'

    Hi! I was going through the guide and I got the error: AttributeError: module 'tensorflow.keras.preprocessing' has no attribute 'image_dataset_from_directory'

    Here https://github.com/keras-team/keras-io/blob/a3bb3dc49b8eb0ebbe8a5d91329f8378eacdd7d4/examples/vision/image_classification_from_scratch.py#L83

    I have no idea how to fix it and why it's not working. Can you help me?

    Package versions:

    • keras 2.3.1
    • tensorflow 2.1.0
    opened by baksheev 26
  • Adding an example on video classification

    There are many subtleties to training a well-performing video classifier, and there are many ways to train one. This example walks through one of them. Hopefully, it will be helpful for the community.

    cla: yes 
    opened by sayakpaul 23
  • Example on semi-supervised learning with SimCLR

    This example demonstrates the usage of contrastive pretraining combined with supervised finetuning on the STL10 dataset. The performance of the pretrained + finetuned model exceeds that of its supervised baseline counterpart.

    Other features include:

    • custom image augmentation layers
    • sampling differently-sized batches from the zipped labeled and unlabeled datasets simultaneously

    A current issue is that the example exceeds the allowed line count (386 vs. 300). The implementation could be made somewhat shorter by removing the training-curve visualization and the print() and model.summary() statements; however, these are helpful for comparison with the baseline solution.

    cla: yes 
    opened by beresandras 22
  • Example on using CutMix Augmentation for Image Classification

    Augmentation techniques like MixUp and CutOut come with some issues. CutOut removes a square patch from an image and fills it with Gaussian noise or black pixels, which can delete important portions of the image during training; this can limit what a CNN learns. With MixUp, the generated images can look unnatural and confuse the model, especially on localization tasks.

    This example shows how to use the CutMix augmentation technique to overcome these issues. I used the CIFAR-10 dataset, and CutMix performs better and yields better results than simple augmentation.

    cla: yes 
    opened by sayannath 22
  • Create an example for implementing the Vision Transformer (ViT) model

    Hello @fchollet - Kindly review this PR. I created an example to demonstrate the implementation of the Vision Transformer model on the CIFAR-100 classification dataset.

    opened by ksalama 22
  • New Code Example: EEG Signal Classification for Stimuli identification

    This code example performs EEG signal classification for stimuli identification using convolutional neural networks. I hope to make this my first contribution of many to the wonderful examples stored here!

    Signed-off-by: Suvaditya Mukherjee [email protected]

    opened by suvadityamuk 21
  • How to save model after training?

    I am following the Image similarity estimation using a Siamese Network with a triplet loss example.

    I see the model is trained here.

    siamese_model.fit(train_dataset, epochs=10, validation_data=val_dataset)

    But the model is not saved after training. How to save the model after training so that we don't need to re-train the model from scratch during evaluation?

    Also, I see the example mentions "Writing a training loop from scratch"; I am not sure what its purpose is. Can anyone please explain?
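    For reference, a minimal sketch of one way to do this is to save the trained weights after fitting and restore them before evaluation (the file name is an arbitrary placeholder, and load_weights assumes the model has been rebuilt with the same architecture):

    siamese_model.fit(train_dataset, epochs=10, validation_data=val_dataset)

    # Persist the trained weights to disk.
    siamese_model.save_weights("siamese.weights.h5")

    # Later: rebuild siamese_model with the same architecture, then restore.
    siamese_model.load_weights("siamese.weights.h5")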

    opened by smith-co 20
  • Introduce keras-io documentation for KerasCV

    We can submit this after the v0.1.0 release of KerasCV (I plan to do this in a week or so).

    Feedback needed on the guides, and here is a Colab link so you can see the images in the guide: https://colab.research.google.com/drive/1t3lwXhA1guK3nwIyz1-kIs84OuZUMci5?usp=sharing

    I'll be adding a COCO metrics guide tomorrow.

    opened by LukeWood 20
  • add: an example on pointnet model for segmentation

    Cc'ing @soumik12345, who is the primary author of this example.

    A Colab Notebook is available here for verification: https://colab.research.google.com/drive/1-_pB22ZIbYJM95JH-vkXzXlZ1mhsx9pB?usp=sharing.

    cla: no 
    opened by sayakpaul 18
  • Wrong parameter to MultiHeadAttention in example image_classification_with_vision_transformer

    In /examples/vision/image_classification_with_vision_transformer.py

    Shouldn't the key_dim parameter to MultiHeadAttention() be projection_dim / num_heads instead of projection_dim?
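    For context, key_dim in keras.layers.MultiHeadAttention is the size of each individual attention head, so the two options differ as sketched below (a minimal sketch with placeholder values, not the example's actual code):

    from tensorflow.keras import layers

    num_heads, projection_dim = 4, 64  # placeholder values

    # As written in the example: each head has size projection_dim,
    # so the total attention width is num_heads * projection_dim.
    attention_as_written = layers.MultiHeadAttention(
        num_heads=num_heads, key_dim=projection_dim
    )

    # The original Transformer/ViT convention: the heads split
    # projection_dim between them.
    attention_per_head = layers.MultiHeadAttention(
        num_heads=num_heads, key_dim=projection_dim // num_heads
    )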

    opened by hubtub2 0
  • The Encoder input doesn't use any Position Encoding?

    Hi,

    I'm referring to the Transformer ASR code. The input is supplied via the SpeechFeatureEmbedding class, but it seems no positional encoding is applied to the first encoder input, unlike the original "Attention Is All You Need" and "Speech Transformer" architectures. Is that correct? If so, how is positional information added to the source spectrograms?

    https://github.com/keras-team/keras-io/blob/e40fab4a194a3eb8a211a0369683d6c50abdbc55/examples/audio/transformer_asr.py#L78
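    For reference, one common remedy (if positions are indeed missing) is to add a learned positional embedding to the encoder input; this is a hypothetical sketch, not the example's actual code:

    import tensorflow as tf
    from tensorflow.keras import layers

    class PositionalEmbedding(layers.Layer):
        # Adds a learned position embedding to a (batch, time, features) input.
        def __init__(self, max_len, feature_dim):
            super().__init__()
            self.pos_emb = layers.Embedding(input_dim=max_len, output_dim=feature_dim)

        def call(self, x):
            positions = tf.range(start=0, limit=tf.shape(x)[1], delta=1)
            return x + self.pos_emb(positions)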

    opened by rshahamiriuoa 0
  • Adding "TF Serving" Keras code example

    Hi, as discussed a while ago in this thread, I am making this PR to add a Keras code example on how to serve models using TF Serving.

    A few notes on this notebook:

    • The goal here is to provide a simple example that teaches good practices and how to get started with TensorFlow/Keras and TF Serving.
    • I was not able to fully reproduce the TF Serving behavior due to the constraints of Colab and of the scripts that generate the tutobooks.
      • Colab does not support Docker, so I could not use the recommended Docker setup for TF Serving.
      • The scripts that generate the tutobooks have issues with running Docker services in the background; I was also unable to run background processes for a similar reason.

    Because of those issues running the TF Serving service, I created the notebook with all the Python code that I was able to run there; the rest (the TF Serving-related code) I had to write as Markdown text showing the code input and its expected outputs. I referenced a Colab notebook that I created which is able to run everything fully.

    I hope that those issues are not a problem and we can add this example. It took some time to get everything together, so let me know if you have any suggestions.

    opened by dimitreOliveira 3
  • "Image classification from scratch" example has warnings and training runs incredibly slow

    Environment: TensorFlow 2.11, Anaconda 2.3.1, Windows 10. Example: "Image classification from scratch".

    Training is incredibly slow. After 8 hours the first epoch was still running (125/147).

    The following warnings appear when executing this instruction:

    augmented_train_ds = train_ds.map(
        lambda x, y: (data_augmentation(x, training=True), y)
    )

    WARNING:tensorflow:Using a while_loop for converting RngReadAndSkip cause there is no registered converter for this op.
    WARNING:tensorflow:Using a while_loop for converting Bitcast cause there is no registered converter for this op.
    WARNING:tensorflow:Using a while_loop for converting Bitcast cause there is no registered converter for this op.
    WARNING:tensorflow:Using a while_loop for converting StatelessRandomUniformV2 cause there is no registered converter for this op.
    WARNING:tensorflow:Using a while_loop for converting ImageProjectiveTransformV3 cause there is no registered converter for this op.
    (the same five warnings are then repeated a second time)

    opened by EZeroDivide 0
  • "Image classification from scratch" doesn't work under Google Colab

    Dataset Generation does not work:

    ValueError                                Traceback (most recent call last)
    <ipython-input> in <module>()
          2 batch_size = 128
          3
    ----> 4 train_ds, val_ds = tf.keras.preprocessing.image_dataset_from_directory(
          5     "PetImages",
          6     validation_split=0.2,

    1 frames

    /usr/local/lib/python3.8/dist-packages/keras/utils/dataset_utils.py in check_validation_split_arg(validation_split, subset, shuffle, seed)
        244             'If subset is set, validation_split must be set, and inversely.')
        245     if subset not in ('training', 'validation', None):
    --> 246         raise ValueError('subset must be either "training" '
        247                          'or "validation", received: %s' % (subset,))
        248     if validation_split and shuffle and seed is None:

    ValueError: subset must be either "training" or "validation", received: both

    opened by EZeroDivide 0