HAR-stacked-residual-bidir-LSTMs - Deep stacked residual bidirectional LSTMs for HAR

Overview

HAR-stacked-residual-bidir-LSTM

The project is based on this repository, which is presented as a tutorial. It performs Human Activity Recognition (HAR) using stacked residual bidirectional LSTM cells (RNN) with TensorFlow.

It resembles the architecture used in "Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation", but without an attention mechanism and with only the encoder part. In fact, we started coding while thinking about applying residual connections to LSTMs, and only afterwards did we see that such a deep LSTM architecture was already in use.

Here, we improve accuracy on the previously used dataset from 91% to 94% and we push the subject further by trying our architecture on another dataset.

Our neural network has been coded to be easy to adapt to new datasets (assuming it is given a fixed, non-dynamic window of signal for every prediction) and to use a different breadth, depth, and length through a new configuration file.

Here is a simplified overview of our architecture:

Simplified view of a "2x2" architecture. We obtain best results with a "3x3" architecture (details below figure).

Bear in mind that the time steps expand to the left for the whole sequence length, and that this architecture example is what we call a "2x2" architecture: 2 residual cells per block, stacked 2 times, for a total of 4 bidirectional cells, which is in reality 8 unidirectional LSTM cells. We obtain the best results with a "3x3" architecture, consisting of 18 LSTM cells.

Neural network's architecture

Mainly, the number of stacked and residual layers can be parametrized easily, as well as whether or not bidirectional LSTM cells are to be used. Input data needs to be windowed into an array with one extra dimension: training and testing are never done on full signal lengths, and they use shuffling with resets of the hidden cells' states.
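As a rough illustration, a configuration for a "3x3" bidirectional network might look like the sketch below. A few attribute names (n_steps, n_inputs, n_hidden, n_layers_in_highway, learning_rate, lambda_loss_amount, clip_gradients) are visible elsewhere on this page; the others, and the exact class layout, are assumptions, so treat this as a sketch rather than the shipped config file.

    class Config(object):
        """Hypothetical "3x3" configuration (a sketch, not the actual config file)."""

        def __init__(self, X_train, X_test):
            # Data geometry: fixed window of time steps and input channels.
            self.n_steps = X_train.shape[1]    # e.g. 128 time steps per window
            self.n_inputs = X_train.shape[2]   # e.g. 9 sensor channels
            self.n_hidden = 28                 # width of each LSTM layer (assumed value)
            self.n_classes = 6

            # Architecture breadth/depth: 3 residual cells per block, stacked 3 times.
            self.n_layers_in_highway = 3
            self.n_stacked_layers = 3          # assumed attribute name
            self.use_bidirectional_cells = True  # assumed attribute name

            # Training hyperparameters (values taken from the logged runs).
            self.learning_rate = 0.001
            self.lambda_loss_amount = 0.005
            self.clip_gradients = 5.0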

We are using a deep neural network with stacked LSTM cells as well as residual (highway) LSTM cells for every stacked layer, a little bit like in ResNet, but for RNNs.

Our LSTM cells are also bidirectional in terms of how they pass through the time axis, but differ from classic bidirectional LSTMs in that we concatenate their output features rather than adding them element-wise. A simple hidden ReLU layer then lowers the dimension of those concatenated features before sending them to the next stacked layer. Bidirectionality can be disabled easily.
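To make that concatenate-then-reduce step concrete, here is a minimal, framework-agnostic NumPy sketch of what one bidirectional layer's output processing amounts to; the function and variable names are illustrative and not taken from the repository's TensorFlow code.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def bidir_concat_then_reduce(forward_steps, backward_steps, W, b):
        # forward_steps / backward_steps: one [batch, n_hidden] array per time step.
        # The two directions are concatenated (not summed element-wise), then a
        # hidden ReLU layer brings the dimension back down for the next stacked layer.
        outputs = []
        for fwd, bwd in zip(forward_steps, backward_steps):
            concatenated = np.concatenate([fwd, bwd], axis=-1)        # [batch, 2 * n_hidden]
            outputs.append(relu(np.dot(concatenated, W) + b))          # [batch, n_hidden]
        return outputs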

Setup

We used TensorFlow 0.11 and Python 2. scikit-learn (sklearn) is also used.

The two datasets can be loaded by running python download_datasets.py in the data/ folder.

To preprocess the second dataset (opportunity challenge dataset), the signal submodule of scipy is needed, as well as pandas.

Results using the previous public domain HAR dataset

This dataset named A Public Domain Dataset for Human Activity Recognition Using Smartphones is about classifying the type of movement amongst six categories: (WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING).

The best results, a test accuracy of 94%, are achieved with the 3x3 bidirectional architecture, a learning rate of 0.001, and an L2 regularization multiplier (weight decay) of 0.005, as seen in the 3x3_result_HAR_6.txt file.
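For readers wondering how that L2 multiplier enters the loss, here is a hedged TensorFlow 1.x-style sketch; the exclusion of biases and batch-norm parameters follows the changelog note later on this page, but the exact filtering in the repository's code may differ.

    import tensorflow as tf

    def l2_weight_decay(lambda_loss_amount=0.005):
        # Sum the L2 penalty over weight matrices only; biases and batch-norm
        # parameters (beta/gamma) are excluded from the weight decay.
        weights = [v for v in tf.trainable_variables()
                   if not any(s in v.name.lower() for s in ("bias", "beta", "gamma"))]
        return lambda_loss_amount * tf.add_n([tf.nn.l2_loss(w) for w in weights])

    # The resulting penalty is then added to the cross-entropy loss before optimization.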

Training and testing can be launched by running the config: python config_dataset_HAR_6_classes.py.

Results from the Opportunity dataset

The neural network has also been tried on the Opportunity dataset to see if the architecture could be easily adapted to a similar task.

Don't miss this video, which offers a good overview and understanding of the dataset.

We obtain a test F1-score of 0.893. Our results can be compared to the state-of-the-art DeepConvLSTM, which is used on the same dataset and achieves a test F1-score of 0.9157.

We only used a subset of the full dataset, as done in other research, in order to simulate the conditions of the competition: 113 sensor channels and a 17-category output (plus the NULL class, for a total of 18 classes). The windowing of the series for feeding our neural network is also the same: 24 time steps per classification, on a 30 Hz signal. However, we observed no significant difference between using 128 time steps and 24 time steps (0.891 vs 0.893 F1-score). Our LSTM cells' inner representation is always reset to 0 between series. We also used mean and standard deviation normalization rather than min-to-max rescaling, to rescale features to a zero mean and a standard deviation of 0.5. More details about the preprocessing are explained in their paper. Other details, such as the fact that the classification output is sampled only at the last time step for the training of the neural network, can be found in their preprocessing script, which we adapted in our repository.
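As an illustration of that normalization step, here is a small NumPy sketch; the function name and the choice of computing statistics from the training windows only are assumptions for clarity, not the repository's exact preprocessing code.

    import numpy as np

    def normalize_to_half_std(X_train, X_test, eps=1e-8):
        # Rescale each sensor channel to zero mean and a standard deviation of 0.5,
        # using statistics computed over the training windows only.
        mean = X_train.mean(axis=(0, 1), keepdims=True)
        std = X_train.std(axis=(0, 1), keepdims=True) + eps
        scale = 0.5 / std
        return (X_train - mean) * scale, (X_test - mean) * scale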

The config file can be run like this: python config_dataset_opportunity_18_classes.py. For best results, it is possible to readjust the learning rate, as in the 3x3_result_opportunity_18.txt file.

Citation

The paper is available on arXiv: https://arxiv.org/abs/1708.08989

Here is the BibTeX citation code:

@article{DBLP:journals/corr/abs-1708-08989,
  author    = {Yu Zhao and
               Rennong Yang and
               Guillaume Chevalier and
               Maoguo Gong},
  title     = {Deep Residual Bidir-LSTM for Human Activity Recognition Using Wearable
               Sensors},
  journal   = {CoRR},
  volume    = {abs/1708.08989},
  year      = {2017},
  url       = {http://arxiv.org/abs/1708.08989},
  archivePrefix = {arXiv},
  eprint    = {1708.08989},
  timestamp = {Mon, 13 Aug 2018 16:46:48 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1708-08989},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Collaborate with us on similar research projects

Join the slack workspace for time series processing, where you can:

  • Collaborate with us and other researchers on writing more time series processing papers, in the #research channel;
  • Do business with us and other companies for services and products related to time series processing, in the #business channel;
  • Talk about how to do Clean Machine Learning using Neuraxle, in the #neuraxle channel;

Online Course: Learn Deep Learning and Recurrent Neural Networks (DL&RNN)

We have created a course on Deep Learning and Recurrent Neural Networks (DL&RNN). Request access to the course here. It is the most densely packed and accelerated course out there on this precise topic of DL&RNN.

We've also created another course on how to do Clean Machine Learning with the right design patterns and the right software architecture, so that your code can evolve correctly and remain usable in production environments.

Comments
  • Input 'split_dim' of 'Split' Op has type float32 that does not match expected type of int32

    Hi, your code helped me a lot, and I really appreciate it.

    But when I try to run python config_dataset_HAR_6_classes.py, an error occurs.

    I attach the environment and log below.

    TensorFlow version: 1.1, OS: Ubuntu 16.04


    test@test:~/HAR-stacked-residual-bidir-LSTMs$ python config_dataset_HAR_6_classes.py
    learning_rate: 0.001
    lambda_loss_amount: 0.005

    Some useful info to get an insight on dataset's shape and normalisation:
    features shape, labels shape, each features mean, each features standard deviation
    ((2947, 128, 9), (2947, 6), 0.099139921, 0.39567086)
    the dataset is therefore properly normalised, as expected.
    (128, ?, 9) (?, 9)
    Traceback (most recent call last):
      File "config_dataset_HAR_6_classes.py", line 195, in <module>
        run_with_config(EditedConfig, X_train, y_train, X_test, y_test)
      File "/home/jungi/HAR-stacked-residual-bidir-LSTMs/lstm_architecture.py", line 270, in run_with_config
        pred_y = LSTM_network(X, config, keep_prob_for_dropout)
      File "/home/jungi/HAR-stacked-residual-bidir-LSTMs/lstm_architecture.py", line 207, in LSTM_network
        hidden = tf.split(0, config.n_steps, feature_mat)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1198, in split
        split_dim=axis, num_split=num_or_size_splits, value=value, name=name)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 3306, in _split
        num_split=num_split, name=name)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 514, in apply_op
        (prefix, dtypes.as_dtype(input_arg.type).name))
    TypeError: Input 'split_dim' of 'Split' Op has type float32 that does not match expected type of int32.

    opened by 2jungi 4
  • Impressed work

    I'm wondering if such an RNN network could be used to extract features from time series data. If so, it would save a lot of work on feature engineering.

    opened by paulcx 3
  • question about tf.get_variable()

    Hi, I consulted some materials about tf.Variable() and tf.get_variable(), but I am still confused about them. For example, in the function linear() at lines 82-104, why define W = tf.get_variable() instead of defining W = tf.Variable() in class Config? Thank you.

    opened by zhaoyu611 3
  • Hello friends

    Hello friend, I am doing the same experiments these days. The state-of-the-art model reports a good result of 91% F1 score; I ran his code and got 91%, but when I switched to Keras 2 with TF as backend I could only get an F1 score of 88% at most. What is your result using TF?

    opened by gladuo 1
  • Error when use the Bidirectional cells

    Fix by: layer_hidden_outputs = [tf.concat([f, b], len(f.get_shape()) - 1) for f, b in zip(forward, backward)]

    I got this error when enabling the bidirectional cells, any suggestions?

    learning_rate: 0.001
    lambda_loss_amount: 0.005
    clip_gradients: 5.0

    Some useful info to get an insight on dataset's shape and normalisation:
    features shape, labels shape, each features mean, each features standard deviation
    (2947, 128, 9) (2947, 6) 0.0991399 0.395671
    the dataset is therefore properly normalised, as expected.
    (128, ?, 9) (?, 9)
    128 (?, 9)

    Creating hidden #1:
    bidir: 128 (?, 14)
    Traceback (most recent call last):
      File "/Users/zhaowenichi/Downloads/HAR-stacked-residual-bidir-LSTMs-master/config_dataset_HAR_6_classes.py", line 196, in <module>
        run_with_config(EditedConfig, X_train, y_train, X_test, y_test)
      File "/Users/zhaowenichi/Downloads/HAR-stacked-residual-bidir-LSTMs-master/lstm_architecture.py", line 267, in run_with_config
        pred_y = LSTM_network(X, config, keep_prob_for_dropout)
      File "/Users/zhaowenichi/Downloads/HAR-stacked-residual-bidir-LSTMs-master/lstm_architecture.py", line 212, in LSTM_network
        hidden = residual_bidirectional_LSTM_layers(hidden, config.n_inputs, config.n_hidden, 1, config, keep_prob_for_dropout)
      File "/Users/zhaowenichi/Downloads/HAR-stacked-residual-bidir-LSTMs-master/lstm_architecture.py", line 165, in residual_bidirectional_LSTM_layers
        hidden_LSTM_layer = get_lstm(input_hidden_tensor)
      File "/Users/zhaowenichi/Downloads/HAR-stacked-residual-bidir-LSTMs-master/lstm_architecture.py", line 159, in <lambda>
        get_lstm = lambda input_tensor: bi_LSTM_cell(input_tensor, n_input, n_output, config)
      File "/Users/zhaowenichi/Downloads/HAR-stacked-residual-bidir-LSTMs-master/lstm_architecture.py", line 140, in bi_LSTM_cell
        for f, b in zip(forward, backward)]
      File "/Users/zhaowenichi/Downloads/HAR-stacked-residual-bidir-LSTMs-master/lstm_architecture.py", line 140, in <listcomp>
        for f, b in zip(forward, backward)]
      File "/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1047, in concat
        dtype=dtypes.int32).get_shape(
      File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 651, in convert_to_tensor
        as_ref=False)
      File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 716, in internal_convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
      File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function
        return constant(v, dtype=dtype, name=name)
      File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 165, in constant
        tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
      File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 367, in make_tensor_proto
        _AssertCompatible(values, dtype)
      File "/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 302, in _AssertCompatible
        (dtype.name, repr(mismatch), type(mismatch).name))
    TypeError: Expected int32, got list containing Tensors of type '_Message' instead.

    opened by zhaowenyi94 0
  • Improved training behavior

    • Mean and variance normalization rather than a custom min and max (because of this the preprocessing script needs to be run again)
    • More training data by using a smaller window step, but not a bigger window (and not more test data)
    • L2 weight regularization now affects neither biases nor batch-norm layers
    • Batch norm fixed: before, there were different params for each time step
    • Added boosting during training to retrain on the 5 worst batches (in terms of loss) at the end of each iteration
    • Some minor changes to the hyperparameters have also been made.

    I also tried filtering the accelerometers to extract a new "gravity" component with a 0.3 Hz low-pass Butterworth filter, but the results were similar, so I undid that; the code for it still exists and can be re-enabled by flipping a boolean in the preprocessing script of the data folder.
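    A minimal scipy.signal sketch of that kind of gravity extraction is shown below; the function name, filter order, and the use of filtfilt are assumptions for illustration, not the repository's preprocessing code.

        from scipy import signal

        def extract_gravity(acc, fs=30.0, cutoff_hz=0.3, order=3):
            # Low-pass Butterworth filter: the slow component of the accelerometer
            # signal approximates gravity; subtracting it leaves body acceleration.
            b, a = signal.butter(order, cutoff_hz / (fs / 2.0), btype="low")
            gravity = signal.filtfilt(b, a, acc, axis=0)
            return gravity, acc - gravity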

    opened by guillaume-chevalier 0
  • Architecture to be optimised

    Implemented stacked residual bidirectional LSTM cells. The code is to be reviewed and optimized; I did not touch the hyperparameters much. It blew through my 2 GB GPU's RAM, so testing this code was hard. Hope you can do well on your machine!

    opened by guillaume-chevalier 0
  • sliding window error

    Loading the data is fine:

    ..from file data/oppChallenge_gestures.data
    ..reading instances: train (557963, 113), test (118750, 113)

    But when I call the sliding window function, the error below is shown. Sensor data is segmented using a sliding window mechanism:

    X_test, y_test = opp_sliding_window(X_test, y_test, SLIDING_WINDOW_LENGTH, SLIDING_WINDOW_STEP_SHORT)
    X_train, y_train = opp_sliding_window(X_train, y_train, SLIDING_WINDOW_LENGTH, SLIDING_WINDOW_STEP)


    TypeError                                 Traceback (most recent call last)
    in ()
          1 # Sensor data is segmented using a sliding window mechanism
    ----> 2 X_test, y_test = opp_sliding_window(X_test, y_test, SLIDING_WINDOW_LENGTH, SLIDING_WINDOW_STEP_SHORT)
          3 X_train, y_train = opp_sliding_window(X_train, y_train, SLIDING_WINDOW_LENGTH, SLIDING_WINDOW_STEP)

    1 frames
    /content/drive/My Drive/sensors2020/sliding_window.py in sliding_window(a, ws, ss, flatten)
         93 # remove any dimensions with size 1
         94 dim = filter(lambda i : i != 1, dim)
    ---> 95 return strided.reshape(dim)

    TypeError: expected sequence object with len >= 0 or a single integer

    opened by nattafahhm 0
  • Hi~ question about add_highway_redisual in lstm_architecture.py

    Hello, your code seems to be a little different from your model figure. Code as follows:

        for i in range(config.n_layers_in_highway - 1):
            with tf.variable_scope('LSTM_residual_{}'.format(i)) as scope2:
                hidden_LSTM_layer = add_highway_redisual(
                    hidden_LSTM_layer,
                    get_lstm(input_hidden_tensor)  # why is the input_hidden_tensor?
                )

    But in your figure, it seems like the output of the first biLSTM is added to the output of the second biLSTM in one Residual Layer. Did I make a mistake? Thank you!

    opened by chaojidaxingxin 0