A playable implementation of Fully Convolutional Networks with Keras.

Overview

keras-fcn

[badges: Build Status | codecov | License: MIT]

A re-implementation of Fully Convolutional Networks with Keras

Installation

Dependencies

  1. keras
  2. tensorflow

Install with pip

$ pip install git+https://github.com/JihongJu/keras-fcn.git

Build from source

$ git clone https://github.com/JihongJu/keras-fcn.git
$ cd keras-fcn
$ pip install --editable .

Usage

FCN with VGG16

from keras_fcn import FCN
fcn_vgg16 = FCN(input_shape=(500, 500, 3), classes=21,  
                weights='imagenet', trainable_encoder=True)
fcn_vgg16.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
fcn_vgg16.fit(X_train, y_train, batch_size=1)
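
Here X_train and y_train are assumed to be 4-D arrays: (batch, height, width, 3) images and (batch, height, width, classes) one-hot label maps. A minimal sketch of compatible dummy data:

import numpy as np

# Hypothetical dummy data matching input_shape=(500, 500, 3) and classes=21.
X_train = np.random.rand(8, 500, 500, 3).astype('float32')  # 8 RGB images
y_train = np.zeros((8, 500, 500, 21), dtype='float32')      # one-hot label maps
y_train[..., 0] = 1.0  # e.g. label every pixel as class 0, for illustration only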

FCN with VGG19

from keras_fcn.models import FCN_VGG19
fcn_vgg19 = FCN_VGG19(input_shape=(500, 500, 3), classes=21,
                      weights='imagenet', trainable_encoder=True)
fcn_vgg19.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
fcn_vgg19.fit(X_train, y_train, batch_size=1)

Custom FCN (VGG16 as an example)

from keras.layers import Input, Activation
from keras.models import Model
from keras_fcn.encoders import Encoder
from keras_fcn.decoders import VGGUpsampler
from keras_fcn.blocks import (vgg_conv, vgg_fc)
inputs = Input(shape=(224, 224, 3))
blocks = [vgg_conv(64, 2, 'block1'),
          vgg_conv(128, 2, 'block2'),
          vgg_conv(256, 3, 'block3'),
          vgg_conv(512, 3, 'block4'),
          vgg_conv(512, 3, 'block5'),
          vgg_fc(4096)]
encoder = Encoder(inputs, blocks, weights='imagenet',
                  trainable=True)
feat_pyramid = encoder.outputs   # A feature pyramid with 5 scales
feat_pyramid = feat_pyramid[:3]  # Select only the top three scales of the pyramid
feat_pyramid.append(inputs)      # Add the input image to the bottom of the pyramid


outputs = VGGUpsampler(feat_pyramid, scales=[1, 1e-2, 1e-4], classes=21)
outputs = Activation('softmax')(outputs)

fcn_custom = Model(inputs=inputs, outputs=outputs)

Implementing a custom Fully Convolutional Network thus reduces to defining a series of convolutional blocks and stacking them on top of one another.
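
Once assembled, fcn_custom predicts per-pixel class probabilities. A minimal inference sketch (the random test image below is a stand-in for real data):

import numpy as np

x = np.random.rand(1, 224, 224, 3).astype('float32')  # hypothetical test image
probs = fcn_custom.predict(x)           # (1, 224, 224, 21) class probabilities
label_map = np.argmax(probs, axis=-1)   # (1, 224, 224) per-pixel class indices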

Custom decoders

from keras_fcn.blocks import vgg_upsampling
from keras_fcn.decoders import Decoder
decode_blocks = [
    vgg_upsampling(classes=21, target_shape=(None, 14, 14, None), scale=1),
    vgg_upsampling(classes=21, target_shape=(None, 28, 28, None), scale=1e-2),
    vgg_upsampling(classes=21, target_shape=(None, 224, 224, None), scale=1e-4)
]
outputs = Decoder(feat_pyramid[-1], decode_blocks)
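
The decoder output can then be wrapped into a model in the same way as the custom FCN above; a minimal sketch reusing the softmax and Model assembly from that example (the model name is arbitrary):

from keras.layers import Activation
from keras.models import Model

outputs = Activation('softmax')(outputs)
fcn_custom_decoder = Model(inputs=inputs, outputs=outputs)  # hypothetical name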

The decode_blocks can be customized as well.

from keras.layers import Conv2D, Lambda, add
from keras_fcn.layers import BilinearUpSampling2D

def vgg_upsampling(classes, target_shape=None, scale=1, block_name='featx'):
    """A VGG convolutional block with bilinear upsampling for decoding.

    :param classes: Integer, number of classes
    :param scale: Float, scale factor applied to the input feature, varying from 0 to 1
    :param target_shape: 4D tuple with target_height, target_width as
    the 2nd, 3rd elements if `channels_last` or as the 3rd, 4th elements if
    `channels_first`.

    >>> from keras_fcn.blocks import vgg_upsampling
    >>> feat1, feat2, feat3 = feat_pyramid[:3]
    >>> y = vgg_upsampling(classes=21, target_shape=(None, 14, 14, None),
    ...                    scale=1, block_name='feat1')(feat1, None)
    >>> y = vgg_upsampling(classes=21, target_shape=(None, 28, 28, None),
    ...                    scale=1e-2, block_name='feat2')(feat2, y)
    >>> y = vgg_upsampling(classes=21, target_shape=(None, 224, 224, None),
    ...                    scale=1e-4, block_name='feat3')(feat3, y)

    """
    def f(x, y):
        score = Conv2D(filters=classes, kernel_size=(1, 1),
                       activation='linear',
                       padding='valid',
                       kernel_initializer='he_normal',
                       name='score_{}'.format(block_name))(x)
        if y is not None:
            def scaling(xx, ss=1):
                return xx * ss
            scaled = Lambda(scaling, arguments={'ss': scale},
                            name='scale_{}'.format(block_name))(score)
            score = add([y, scaled])
        upscore = BilinearUpSampling2D(
            target_shape=target_shape,
            name='upscore_{}'.format(block_name))(score)
        return upscore
    return f

Try Examples

  1. Download VOC2011 dataset
$ wget "http://host.robots.ox.ac.uk/pascal/VOC/voc2011/VOCtrainval_25-May-2011.tar"
$ tar -xvf VOCtrainval_25-May-2011.tar
$ mkdir ~/Datasets
$ mv TrainVal/VOCdevkit/VOC2011 ~/Datasets
  2. Mount the dataset from the host to the container and start bash in the container image

From the repository root keras-fcn:

$ nvidia-docker run -it --rm -v `pwd`:/root/workspace -v ${HOME}/Datasets/:/root/workspace/data jihong/keras-gpu bash

or equivalently,

$ make bash
  3. Within the container, run the following commands.
$ cd ~/workspace
$ pip install -e .
$ cd voc2011
$ python train.py

For more details, see the source code of the example in Training Pascal VOC2011 Segmentation.
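
Several of the comments below ask how to test a trained model on a single picture. A minimal sketch, assuming the checkpoint path ./model/pascal.hdf5 written by voc2011/train.py and a placeholder image path image.jpg:

import numpy as np
from keras.preprocessing.image import load_img, img_to_array
from keras_fcn import FCN

fcn_vgg16 = FCN(input_shape=(500, 500, 3), classes=21, weights=None)
fcn_vgg16.load_weights('./model/pascal.hdf5')  # checkpoint saved by train.py

x = img_to_array(load_img('image.jpg', target_size=(500, 500)))  # placeholder path
x = np.expand_dims(x, axis=0)         # add the batch dimension -> (1, 500, 500, 3)
probs = fcn_vgg16.predict(x)          # (1, 500, 500, 21) class probabilities
mask = np.argmax(probs[0], axis=-1)   # (500, 500) per-pixel class indices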

Model Architecture

FCN8s with VGG16 as base net:

[figure: fcn_vgg16 model architecture]

TODO

  • Add ResNet
Comments
  • How do I test the picture

    I have completed training the model and want to test it on a picture, so I use model.predict(x). I get an output with shape (1, 500, 500, 21), but every element is NaN. So how do I test it?

    opened by ghost 7
  • Can you give me an example how to test it

    Hi, I am new to this. I have run train.py and saved the model, but I do not know how to use this model to segment my own image. I would appreciate it if you could give me an example that I can run directly.

    opened by caojinmeng 4
  • ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (250, 250, 3)

    Curious what I might be doing wrong in this initialization: I am copying this from the README and get the following error when it runs:

    ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (250, 250, 3)

    it must be in how I am loading my images?

    fcn_vgg16 = FCN(
        input_shape=(250, 250, 3),
        classes=3,
        weights=None,
        trainable_encoder=True
    )

    fcn_vgg16.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy']
    )

    fcn_vgg16.fit_generator(training_dataset(), verbose=2, steps_per_epoch=1500, max_queue_size=10, epochs=1)
    

    training_dataset loads the input images as follows:

    img_input = img_to_array(load_img(path_input))
    img_target = img_to_array(load_img(path_target))
    yield (img_input, img_target)
    
    # (Pdb++) img_input.shape
    # (250, 250, 3)
    # (Pdb++) img_target.shape
    # (250, 250, 3)
    
    

    where img_to_array and load_img are imported from

    from keras.preprocessing.image import (
        load_img,
        img_to_array,
        array_to_img
    )
    

    I think I'm not passing the batch_size properly, which means I'm misunderstanding Keras's fit_generator(...) requirements.
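
    A generator passed to fit_generator must yield batches, i.e. 4-D arrays. A minimal sketch of that fix (an illustrative assumption, with the image paths passed in as parameters):

    import numpy as np
    from keras.preprocessing.image import load_img, img_to_array

    def training_dataset(path_input, path_target):
        while True:
            # img_to_array returns (250, 250, 3); add a batch axis so each
            # yielded array is 4-D, e.g. (1, 250, 250, 3), as Keras expects.
            img_input = np.expand_dims(img_to_array(load_img(path_input)), axis=0)
            img_target = np.expand_dims(img_to_array(load_img(path_target)), axis=0)
            yield (img_input, img_target)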

    opened by theladyjaye 3
  • the predicted result is nan

    HI

    I ran your program and found that the parameters in some layers are NaN. Then I ran the program with MSE and it seemed OK. Is there something wrong with the cross-entropy?

    opened by kaijie-qin 3
  • error:ValueError: output of generator should be a tuple `(x, y, sample_weight)` or `(x, y)`. Found: None

    Hi, thanks for keras-fcn code.

    After running train.py, this error occurs. The Keras version is 2, the code runs on Windows with Python 3.5, and Pascal VOC 2011 is in the data folder.

    Using TensorFlow backend. Epoch 1/100 Exception in thread Thread-1: Traceback (most recent call last): File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\threading.py", line 914, in _bootstrap_inner self.run() File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\threading.py", line 862, in run self._target(*self._args, **self._kwargs) File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 612, in data_generator_task generator_output = next(self._generator) File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\preprocessing\image.py", line 732, in next return self.next(*args, **kwargs) File "C:\cnn\inceptionV4\keras-fcn\keras-fcn-master\voc2011\voc_generator.py", line 113, in next x = self.image_set_loader.load_img(fn) File "C:\cnn\inceptionV4\keras-fcn\keras-fcn-master\voc2011\voc_generator.py", line 203, in load_img raise IOError('Image {} does not exist.'.format(img_path)) OSError: Image ../data/VOC2011/JPEGImages/b'2009_002423'.jpg does not exist.

    Traceback (most recent call last): File "C:\cnn\inceptionV4\keras-fcn\keras-fcn-master\voc2011\train.py", line 88, in callbacks=[lr_reducer, early_stopper, csv_logger]) File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper return func(*args, **kwargs) File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\models.py", line 1124, in fit_generator initial_epoch=initial_epoch) File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper return func(*args, **kwargs) File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 1877, in fit_generator str(generator_output)) ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None

    opened by jeansely 2
  • VOC2011 training ends up with unchanged acc

    hi @JihongJu

    Thanks for sharing fcn program.

    I followed your instructions to train on VOC2011; the intermediate results are shown below. After three epochs, the loss did not change much, and after ten epochs the accuracy even became constant.

    I trained with a batch_size of 1 or 4, which led to no change at all. I also tried training with and without pre-trained weights, and still no change.

    Epoch 1/100 278/278 [==============================] - 82s 295ms/step - loss: 14.3584 - acc: 0.7594 - val_loss: 3.5362 - val_acc: 0.7539

    Epoch 00001: val_loss improved from inf to 3.53623, saving model to ./model/pascal.hdf5 Epoch 2/100 278/278 [==============================] - 80s 287ms/step - loss: 2.5873 - acc: 0.7622 - val_loss: 2.0838 - val_acc: 0.7539

    Epoch 00002: val_loss improved from 3.53623 to 2.08382, saving model to ./model/pascal.hdf5 Epoch 3/100 278/278 [==============================] - 80s 288ms/step - loss: 1.8258 - acc: 0.7622 - val_loss: 1.6789 - val_acc: 0.7539

    Epoch 00003: val_loss improved from 2.08382 to 1.67890, saving model to ./model/pascal.hdf5 Epoch 4/100 278/278 [==============================] - 80s 288ms/step - loss: 1.5551 - acc: 0.7622 - val_loss: 1.4830 - val_acc: 0.7539

    Epoch 00004: val_loss improved from 1.67890 to 1.48300, saving model to ./model/pascal.hdf5 Epoch 5/100 278/278 [==============================] - 80s 287ms/step - loss: 1.4069 - acc: 0.7622 - val_loss: 1.4185 - val_acc: 0.7539 Epoch 00005: val_loss improved from 1.48300 to 1.41854, saving model to ./model/pascal.hdf5 Epoch 6/100 278/278 [==============================] - 80s 287ms/step - loss: 1.3249 - acc: 0.7622 - val_loss: 1.3197 - val_acc: 0.7539

    Epoch 00006: val_loss improved from 1.41854 to 1.31969, saving model to ./model/pascal.hdf5 Epoch 7/100 278/278 [==============================] - 80s 287ms/step - loss: 1.2747 - acc: 0.7622 - val_loss: 1.2816 - val_acc: 0.7539

    Epoch 00007: val_loss improved from 1.31969 to 1.28161, saving model to ./model/pascal.hdf5 Epoch 8/100 278/278 [==============================] - 79s 285ms/step - loss: 1.2439 - acc: 0.7622 - val_loss: 1.2859 - val_acc: 0.7539

    Epoch 00008: val_loss did not improve from 1.28161 Epoch 9/100 278/278 [==============================] - 80s 286ms/step - loss: 1.2223 - acc: 0.7622 - val_loss: 1.2398 - val_acc: 0.7539

    Epoch 00009: val_loss improved from 1.28161 to 1.23977, saving model to ./model/pascal.hdf5 Epoch 10/100 278/278 [==============================] - 79s 286ms/step - loss: 1.2084 - acc: 0.7622 - val_loss: 1.2319 - val_acc: 0.7539

    Epoch 00010: val_loss improved from 1.23977 to 1.23188, saving model to ./model/pascal.hdf5 Epoch 11/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1997 - acc: 0.7622 - val_loss: 1.2195 - val_acc: 0.7539

    Epoch 00011: val_loss improved from 1.23188 to 1.21950, saving model to ./model/pascal.hdf5 Epoch 12/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1930 - acc: 0.7622 - val_loss: 1.2187 - val_acc: 0.7539

    Epoch 00012: val_loss improved from 1.21950 to 1.21867, saving model to ./model/pascal.hdf5 Epoch 13/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1903 - acc: 0.7622 - val_loss: 1.2112 - val_acc: 0.7539

    Epoch 00013: val_loss improved from 1.21867 to 1.21119, saving model to ./model/pascal.hdf5 Epoch 14/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1894 - acc: 0.7622 - val_loss: 1.2606 - val_acc: 0.7539

    Epoch 00014: val_loss did not improve from 1.21119 Epoch 15/100 278/278 [==============================] - 79s 286ms/step - loss: 1.1894 - acc: 0.7622 - val_loss: 1.2147 - val_acc: 0.7539

    Epoch 00015: val_loss did not improve from 1.21119 Epoch 16/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1869 - acc: 0.7622 - val_loss: 1.2135 - val_acc: 0.7539

    Epoch 00016: val_loss did not improve from 1.21119 Epoch 17/100 278/278 [==============================] - 79s 286ms/step - loss: 1.1831 - acc: 0.7622 - val_loss: 1.2101 - val_acc: 0.7539

    Epoch 00017: val_loss improved from 1.21119 to 1.21015, saving model to ./model/pascal.hdf5 Epoch 18/100 278/278 [==============================] - 79s 286ms/step - loss: 1.1850 - acc: 0.7622 - val_loss: 1.2129 - val_acc: 0.7539

    Epoch 00018: val_loss did not improve from 1.21015 Epoch 19/100 278/278 [==============================] - 79s 286ms/step - loss: 1.1838 - acc: 0.7622 - val_loss: 1.2188 - val_acc: 0.7539

    Epoch 00019: val_loss did not improve from 1.21015 Epoch 20/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1822 - acc: 0.7622 - val_loss: 1.2079 - val_acc: 0.7539

    Epoch 00020: val_loss improved from 1.21015 to 1.20793, saving model to ./model/pascal.hdf5 Epoch 21/100 278/278 [==============================] - 79s 286ms/step - loss: 1.1835 - acc: 0.7622 - val_loss: 1.2083 - val_acc: 0.7539

    Epoch 00021: val_loss did not improve from 1.20793 Epoch 22/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1817 - acc: 0.7622 - val_loss: 1.2146 - val_acc: 0.7539

    Epoch 00022: val_loss did not improve from 1.20793 Epoch 23/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1836 - acc: 0.7622 - val_loss: 1.2028 - val_acc: 0.7539

    Epoch 00023: val_loss improved from 1.20793 to 1.20280, saving model to ./model/pascal.hdf5 Epoch 24/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1805 - acc: 0.7622 - val_loss: 1.2089 - val_acc: 0.7539

    Epoch 00024: val_loss did not improve from 1.20280 Epoch 25/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1816 - acc: 0.7622 - val_loss: 1.2148 - val_acc: 0.7539

    Epoch 00025: val_loss did not improve from 1.20280 Epoch 26/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1830 - acc: 0.7622 - val_loss: 1.2038 - val_acc: 0.7539

    Epoch 00026: val_loss did not improve from 1.20280 Epoch 27/100 278/278 [==============================] - 79s 286ms/step - loss: 1.1792 - acc: 0.7622 - val_loss: 1.2053 - val_acc: 0.7539

    Epoch 00027: val_loss did not improve from 1.20280 Epoch 28/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1846 - acc: 0.7622 - val_loss: 1.2032 - val_acc: 0.7539

    Epoch 00028: val_loss did not improve from 1.20280 Epoch 29/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1808 - acc: 0.7622 - val_loss: 1.2062 - val_acc: 0.7539

    Epoch 00029: val_loss did not improve from 1.20280 Epoch 30/100 278/278 [==============================] - 79s 285ms/step - loss: 1.1801 - acc: 0.7622 - val_loss: 1.2030 - val_acc: 0.7539

    Epoch 00030: val_loss did not improve from 1.20280

    Would you please tell me what's wrong and what I should do next?

    Looking forward to your reply.

    Thanks a lot

    opened by white2018 1
  • number of output classes

    num_output=21 is the number of classes, i.e. "20 object classes + 1 background". But there are no background sample images in VOC 2011, so should num_output = 20?

    Thanks.

    opened by jeansely 1
  • how train other models

    Hi, to train my model I used your code (voc_generator.py, train.py, init_args.yml), but my model has output shape (1, 121, 121, 21). What should be changed in these files? I've tested all possible settings in 'init_args.yml'.

    the error is ; Traceback (most recent call last): File "C:\cnn\inceptionV4\keras-fcn\keras-fcn-master\voc2011\train.py", line 121, in callbacks=[lr_reducer, early_stopper, csv_logger]) File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper return func(*args, **kwargs) File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 1902, in fit_generator class_weight=class_weight) File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 1636, in train_on_batch check_batch_axis=True) File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 1315, in _standardize_user_data exception_prefix='target') File "C:\Program Files\python3.5\python-3.5.3.amd64\lib\site-packages\keras\engine\training.py", line 139, in _standardize_input_data str(array.shape)) ValueError: Error when checking target: expected decoder to have shape (1, 121, 121, 21) but got array with shape (1, 500, 500, 21)

    opened by jeansely 1
  • how fit other model with voc-generator?

    Hi, I want to use your files (train.py & voc_generator.py) with my model, but the FCN model has num_output=21, the number of classes, while my model's output is a Conv2D with filters=1. What should be changed to train my model?

    Also, to run your code with Python 3 on Windows, these lines should be changed:

    in train.py: csv_logger = CSVLogger('output{}_fcn_vgg16.csv'.format(datetime.datetime.now().strftime("%Y%m%d-%H%M%S")))

    in voc_generator.py: self.filenames = np.loadtxt(image_set, dtype=bytes, delimiter="\n").astype(str)

    opened by jeansely 1
  • Requesting License + input on upstream Keras Semantic Segmentation design

    I came across your repository; it looks like good work, and for that reason I'm submitting this request.

    François Chollet, Keras' author, said he is interested in directly incorporating dense prediction/FCN into the Keras API, so I'm seeking suggestions/feedback/contributions at fchollet/keras#6538.

    Also, could you add a license so it is clear how this can be used? I suggest the MIT license, which is the same as Keras'; it is pretty simple and lets people use the code as they would like:

    The MIT License (MIT)
    
    Copyright (c) <year> <copyright holders>
    
    Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
    
    The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
    
    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
    

    Thanks for your consideration!

    opened by ahundt 1
  • import error: No module named 'keras_fcn.backend'

    Import failed. I ran the code cell right after the import statement on Colab and got a ModuleNotFoundError, even though I had just installed the package with pip3.

          1 import keras.backend as K
    ----> 2 import keras_fcn.backend as K1
          3 from keras.utils import conv_utils
          4 from keras.engine.topology import Layer
          5 from keras.engine import InputSpec
    
    ModuleNotFoundError: No module named 'keras_fcn.backend'
    
    
    opened by jazli1999 0
  • U-Net: Error when checking target: expected activation_1 to have 3 dimensions, but got array with shape (1, 224, 224, 21)

    Hi, I'm trying this out with the U-Net architecture but I keep running into this error. I'm not sure what I might be doing wrong. This is what the definition of my model looks like:

    def get_unet(self):

    	inputs = Input((self.img_rows, self.img_cols,3))
    	#print(inputs.shape)
    	conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
    	conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
    	pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    	conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
    	conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
    	pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    	conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
    	conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)
    	pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    	conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
    	conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
    	drop4 = Dropout(0.5)(conv4)
    	pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
    	conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
    	conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
    	drop5 = Dropout(0.5)(conv5)
    
    	up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
    	merge6 = concatenate([drop4,up6], axis = 3)
    	conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
    	conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)
    
    	up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
    	merge7 = concatenate([conv3,up7], axis = 3)
    	conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
    	conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)
    
    	up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
    	merge8 = concatenate([conv2,up8], axis = 3)
    	conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
    	conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8)
    
    	up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
    	merge9 = concatenate([conv1,up9], axis = 3)
    	conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
    	conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    	conv9 = Conv2D(21, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    	conv9 = Conv2D(21, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    	print(conv9.shape)
    
    	
    	reshape = Reshape((21,self.img_rows * self.img_cols))(conv9)
    	print(reshape.shape)
    
    	permute = Permute((2,1))(reshape)
    	print(permute.shape)
    
    	activation = Activation('softmax')(permute)
    	
    	print(activation.shape)
    	model = Model(input = inputs, output = activation)
    
        return model
    
    opened by pgadosey 1
  • Model always predicts the dominant class

    Did not configure the model at all, simply ran

    > from keras_fcn import FCN
    > fcn_vgg19 = FCN_VGG19(input_shape=(500, 500, 3), classes=21,  
    >                       weights='imagenet', trainable_encoder=True)
    > fcn_vgg19.compile(optimizer='rmsprop',
    >                   loss='categorical_crossentropy',
    >                   metrics=['accuracy'])
    > fcn_vgg19.fit(X_train, y_train, batch_size=8, epochs=20)
    

    on the BDD dataset of 20 classes.

    input size: (batch_size, width, height, channels)
    output size: (batch_size, width, height, n_classes)

    Assuming the data is correct, is the model known to be bug-free?

    opened by wangwalton 2
  • StopIteration

    Hi, thanks for your code! I have a problem. Can you help me?

    Loading weights... Epoch 1/100 Traceback (most recent call last): File "/home/ilab/biyoner/sementic_seg/keras-fcn/voc2011/train.py", line 88, in callbacks=[early_stopper, csv_logger, checkpointer, nan_terminator]) File "/home/ilab/biyoner/keras/local/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 87, in wrapper return func(*args, **kwargs) File "/home/ilab/biyoner/keras/local/lib/python2.7/site-packages/keras/engine/training.py", line 2115, in fit_generator generator_output = next(output_generator) File "/home/ilab/biyoner/keras/local/lib/python2.7/site-packages/keras/utils/data_utils.py", line 557, in get six.raise_from(StopIteration(e), e) File "/home/ilab/biyoner/keras/local/lib/python2.7/site-packages/six.py", line 737, in raise_from raise value StopIteration

    Process finished with exit code 1

    Thanks.

    opened by biyoner 2
  • Some typos that are worth mentioning

    Hi JihongJu,

    Thanks for developing a wrapper for FCN models under Keras. My teammates and I find this repo really helpful to play with.

    Nonetheless, below are some issues that we've encountered. We have developed manual workarounds, but to save others' time in debugging (and modifying) the source code, I would like to raise them here.

    1. The FCN with VGG19 example in README.md is not working. That is because the FCN object refers to FCN_VGG16 only, and FCN_VGG19 is not exported in the __init__.py file. One workaround is to modify __init__.py so that it looks like the following:
    """fcn init."""
    
    from .models import (
        FCN,
        FCN_VGG16,
        FCN_VGG19
    )
    
    • Plus, in models.py the docstring for FCN_VGG19 is wrong. Currently it reads as follows:
    def FCN_VGG19(input_shape, classes, weight_decay=0,
                  trainable_encoder=True, weights='imagenet'):
        """Fully Convolutional Networks for semantic segmentation with VGG16.
    

    But it is indeed for VGG19.

    2. In order to load the pre-trained weights, the package automatically downloads them if they're not found under the .keras/models folder. This is implemented in the encoders.py file, but the following line is wrong. It should look for '{}_weights_tf_dim_ordering_tf_kernels_notop.h5' instead:
    # load pre-trained weights
            if weights is not None:
                weights_path = get_file(
                     '{}_weights_tf_dim_ordering_tf_kernels.h5'.format(name),
                     weights,
                     cache_subdir='models')
    

    Please review. Thanks!

    opened by mekomlusa 0