FairyTailor: Multimodal Generative Framework for Storytelling

Overview

Human-in-the-loop visual story co-creation.

Users can create a cohesive children's story by weaving generated texts and retrieved images together with their own input. In this co-creation process, writers contribute their creative thinking, while generative models keep the writing workflow moving. FairyTailor adds another modality and modifies the text generation process to help produce a coherent and creative story.

Architecture

Set-up (development)

After cloning the repository:

Client (Vue 2.6)

Install and check that the client compiles:

cd client
npm i
npm run build

Backend (FastAPI)

Install and activate the environment (a conda environment file is provided):

conda env create -f environment.yml
conda activate MultiModalStory

Install the project as an editable package, together with CLIP:

pip install -e .
pip install git+https://github.com/openai/CLIP.git

After installation run:

python -m spacy download en_core_web_sm

In a Python terminal, download the required NLTK corpora:

import nltk
nltk.download('wordnet')
nltk.download('sentiwordnet')
nltk.download('averaged_perceptron_tagger')

Large Data Management (dvc)

Our large data files are stored on IBM Cloud Object Storage; to pull them from that platform, you will use a special, read-only .dvc/config file.

dvc pull -f

This will pull:

  • backend/outputs (five preset stories)
  • backend/story_generator/downloaded (transformers)
  • client/public/unsplash25k (styled images)

Running the framework during development

Client:

cd client
npm run devw

Backend (with server auto reload):

uvicorn backend.server:app --reload --reload-dir backend

Open the uvicorn server at localhost:8000 in your web browser.

Modification ideas

New Hugging Face transformer

  • Place the transformer in backend/story_generator/downloaded directory.
  • Update the current model path by changing the constant FINETUNED_GPT2_PATH in backend/story_generator/constants.py (see the sketch below).
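
For illustration, a minimal sketch of that change in backend/story_generator/constants.py, assuming the new model was placed in a folder named my-new-gpt2 (a placeholder name) inside backend/story_generator/downloaded; keep the path format consistent with the existing constant:

# backend/story_generator/constants.py
# "my-new-gpt2" is a placeholder; use the folder name of your downloaded model.
FINETUNED_GPT2_PATH = "backend/story_generator/downloaded/my-new-gpt2"

Restart the backend afterwards so the generator loads the new model from this path.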

New images folder

  • Replace the folder client/public/unsplash25k/sketch_images1024 with yours.
  • Update the current path by changing the constant IMAGE_PATH in client/src/components/Constants.js.

API functionalities

  • Add functions to the backend endpoint at backend/server/main.py (a minimal sketch follows this list).
  • Update client/src/js/api/mainApi.js to call the backend endpoint from the client.
  • Update the corresponding user components in client/src/components.
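
As an illustration only, here is a minimal FastAPI route of the kind you might add. The actual app object, imports, and routing style in backend/server/main.py may differ, and the /api/story_title route with its request model is hypothetical:

# backend/server/main.py (sketch; reuse the existing FastAPI app object instead of creating a new one)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TitleRequest(BaseModel):
    story_text: str

@app.post("/api/story_title")  # hypothetical endpoint name
def story_title(req: TitleRequest):
    # Stub logic: take the first sentence as a title.
    # In practice this would call into backend/story_generator.
    return {"title": req.story_text.split(".")[0][:60]}

A matching wrapper in client/src/js/api/mainApi.js would then POST to this route, and the component that needs the result would call that wrapper.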