Only a Matter of Style: Age Transformation Using a Style-Based Regression Model

Overview

The task of age transformation illustrates the change of an individual's appearance over time. Accurately modeling this complex transformation over an input facial image is extremely challenging as it requires making convincing and possibly large changes to facial features and head shape, while still preserving the input identity. In this work, we present an image-to-image translation method that learns to directly encode real facial images into the latent space of a pre-trained unconditional GAN (e.g., StyleGAN) subject to a given aging shift. We employ a pre-trained age regression network used to explicitly guide the encoder to generate the latent codes corresponding to the desired age. In this formulation, our method approaches the continuous aging process as a regression task between the input age and desired target age, providing fine-grained control on the generated image. Moreover, unlike other approaches that operate solely in the latent space using a prior on the path controlling age, our method learns a more disentangled, non-linear path. We demonstrate that the end-to-end nature of our approach, coupled with the rich semantic latent space of StyleGAN, allows for further editing of the generated images. Qualitative and quantitative evaluations show the advantages of our method compared to state-of-the-art approaches.

Description

Official Implementation of our Style-based Age Manipulation (SAM) paper for both training and evaluation. SAM allows modeling fine-grained age transformation from a single input facial image.

Getting Started

Prerequisites

  • Linux or macOS
  • NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported)
  • Python 3

Installation

  • Dependencies:
    We recommend running this repository using Anaconda. All dependencies for defining the environment are provided in environment/sam_env.yaml; example commands for creating the environment are shown below.
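
For example, assuming the environment defined in environment/sam_env.yaml is named sam_env (the name is taken from the yaml file and may differ), it can be created and activated with:

conda env create -f environment/sam_env.yaml
conda activate sam_env  # environment name assumed to be sam_env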

Pretrained Models

Please download the pretrained aging model from the link below.

  • SAM: SAM trained on the FFHQ dataset for age transformation.

In addition, we provide various auxiliary models needed for training your own SAM model from scratch.
This includes the pretrained pSp encoder model for generating the encodings of the input image and the aging classifier used to compute the aging loss during training.

  • pSp Encoder: pSp encoder taken from pixel2style2pixel, trained on the FFHQ dataset for StyleGAN inversion.
  • FFHQ StyleGAN: StyleGAN model pretrained on FFHQ, taken from rosinality, with 1024x1024 output resolution.
  • IR-SE50 Model: Pretrained IR-SE50 model taken from TreB1eN for use in our ID loss during training.
  • VGG Age Classifier: VGG age classifier from DEX, fine-tuned on the FFHQ-Aging dataset for use in our aging loss during training.

By default, we assume that all auxiliary models are downloaded and saved to the directory pretrained_models. However, you may use your own paths by changing the necessary values in configs/paths_config.py.
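
For reference, the relevant entries might look like the following sketch. Apart from 'pretrained_psp' (referenced under Additional Notes below), the key names and file names here are only illustrative assumptions and may differ from the actual configs/paths_config.py:

model_paths = {
    # pSp encoder used when training with --start_from_encoded_w_plus
    'pretrained_psp': 'pretrained_models/psp_ffhq_encode.pt',
    # the following key names and file names are assumptions shown for illustration only
    'stylegan_ffhq': 'pretrained_models/stylegan2-ffhq-config-f.pt',   # FFHQ StyleGAN generator
    'ir_se50': 'pretrained_models/model_ir_se50.pth',                  # IR-SE50 model for the ID loss
    'age_predictor': 'pretrained_models/dex_age_classifier.pth',       # VGG age classifier for the aging loss
}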

Training

Preparing your Data

Please refer to configs/paths_config.py to define the necessary data paths and model paths for training and inference.
Then, refer to configs/data_configs.py to define the source/target data paths for the train and test sets as well as the transforms to be used for training and inference.

As an example, we can first go to configs/paths_config.py and define:

dataset_paths = {
    'ffhq': '/path/to/ffhq/images256x256',
    'celeba_test': '/path/to/CelebAMask-HQ/test_img',
}

Then, in configs/data_configs.py, we define:

DATASETS = {
	'ffhq_aging': {
		'transforms': transforms_config.AgingTransforms,
		'train_source_root': dataset_paths['ffhq'],
		'train_target_root': dataset_paths['ffhq'],
		'test_source_root': dataset_paths['celeba_test'],
		'test_target_root': dataset_paths['celeba_test'],
	}
}

When defining the datasets for training and inference, we will use the values defined in the above dictionary.

Training SAM

The main training script can be found in scripts/train.py.
Intermediate training results are saved to opts.exp_dir. This includes checkpoints, train outputs, and test outputs.
Additionally, if you have tensorboard installed, you can visualize tensorboard logs in opts.exp_dir/logs.

Training SAM with the settings used in the paper can be done by running the following command:

python scripts/train.py \
--dataset_type=ffhq_aging \
--exp_dir=/path/to/experiment \
--workers=6 \
--batch_size=6 \
--test_batch_size=6 \
--test_workers=6 \
--val_interval=2500 \
--save_interval=10000 \
--start_from_encoded_w_plus \
--id_lambda=0.1 \
--lpips_lambda=0.1 \
--lpips_lambda_aging=0.1 \
--lpips_lambda_crop=0.6 \
--l2_lambda=0.25 \
--l2_lambda_aging=0.25 \
--l2_lambda_crop=1 \
--w_norm_lambda=0.005 \
--aging_lambda=5 \
--cycle_lambda=1 \
--input_nc=4 \
--target_age=uniform_random \
--use_weighted_id_loss

Additional Notes

  • See options/train_options.py for all training-specific flags.
  • Note that using the flag --start_from_encoded_w_plus requires you to specify the path to the pretrained pSp encoder.
    By default, this path is taken from configs.paths_config.model_paths['pretrained_psp'].
  • If you wish to resume from a specific checkpoint (e.g. a pretrained SAM model), you may do so using --checkpoint_path, as shown in the example below.
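
For example, resuming training from a previously trained SAM checkpoint might look like the following sketch (the remaining hyperparameter flags from the full training command above would typically be passed as well; the paths are illustrative):

python scripts/train.py \
--dataset_type=ffhq_aging \
--exp_dir=/path/to/new_experiment \
--checkpoint_path=/path/to/experiment/checkpoints/best_model.pt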

Notebooks

Inference Notebook

To help visualize the results of SAM we provide a Jupyter notebook found in notebooks/inference_playground.ipynb.
The notebook will download the pretrained aging model and run inference on the images found in notebooks/images.

MP4 Notebook

To show full-lifespan results using SAM, we provide an additional notebook, notebooks/animation_inference_playground.ipynb, which runs aging on multiple ages between 0 and 100 and interpolates between the results to display the full aging process. The results are saved as MP4 files in notebooks/animations showing the aging and de-aging results.
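
As a rough illustration of the idea (this is not the notebook's actual code, and frames below is a dummy stand-in for the per-age SAM outputs), consecutive results can be blended and written to an MP4, for example with imageio:

import imageio  # writing MP4 files requires the imageio-ffmpeg backend
import numpy as np

def interpolate_frames(frames, steps_between=10):
    # Linearly blend each pair of consecutive aging results for a smooth animation.
    blended = []
    for a, b in zip(frames[:-1], frames[1:]):
        for t in np.linspace(0.0, 1.0, steps_between, endpoint=False):
            blended.append(((1.0 - t) * a + t * b).astype(np.uint8))
    blended.append(frames[-1])
    return blended

# Dummy uint8 RGB frames standing in for real SAM outputs ordered by target age.
frames = [np.full((256, 256, 3), v, dtype=np.uint8) for v in (0, 128, 255)]
with imageio.get_writer('aging.mp4', fps=15) as writer:
    for frame in interpolate_frames(frames):
        writer.append_data(frame)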

Testing

Inference

Once you have trained your model, or if you are using a pretrained SAM model, you can use scripts/inference.py to run inference on a set of images.
For example,

python scripts/inference.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--couple_outputs \
--target_age=0,10,20,30,40,50,60,70,80

Additional notes to consider:

  • During inference, the options used during training are loaded from the saved checkpoint and are then updated using the test options passed to the inference script.
  • Adding the flag --couple_outputs will save an additional image containing the input and output images side-by-side in the sub-directory inference_coupled. Otherwise, only the output image is saved to the sub-directory inference_results.
  • In the above example, we will run age transformation with target ages 0,10,...,80.
    • The results of each target age are saved to the sub-directories inference_results/TARGET_AGE and inference_coupled/TARGET_AGE.
  • By default, the images will be saved at a resolution of 1024x1024, the original output size of StyleGAN.
    • If you wish to save outputs resized to a resolution of 256x256, you can do so by adding the flag --resize_outputs.

Side-by-Side Inference

The above inference script will save each aging result in a different sub-directory for each target age. Sometimes, however, it is more convenient to save all aging results of a given input side-by-side in a single image.

To do so, we provide a script, inference_side_by_side.py, that works in a similar manner to the regular inference script:

python scripts/inference_side_by_side.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--target_age=0,10,20,30,40,50,60,70,80

Here, all aging results 0,10,...,80 will be saved side-by-side with the original input image.

Reference-Guided Inference

In the paper, we demonstrated how one can perform style mixing on the fine-level style inputs with a reference image to control global features such as hair color.

To perform style mixing using reference images, we provide the script reference_guided_inference.py. Here, we first perform aging using the specified target age(s). Then, style mixing is performed using the specified reference images and the specified layers. For example, one can run:

python scripts/reference_guided_inference.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--ref_images_paths_file=/path/to/ref_list.txt \
--latent_mask=8,9 \
--target_age=50,60,70,80

Here, the reference images should be specified in the file defined by --ref_images_paths_file and should have the following format:

/path/to/reference/1.jpg
/path/to/reference/2.jpg
/path/to/reference/3.jpg
/path/to/reference/4.jpg
/path/to/reference/5.jpg

In the above example, we perform aging using 4 different target ages. For each target age, we first transform the test samples defined by --data_path and then perform style mixing on layers 8 and 9, as defined by --latent_mask. The results of each target age are saved in its own sub-directory.
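
Conceptually, --latent_mask lists the indices of StyleGAN's style vectors (18 vectors of dimension 512 for a 1024x1024 generator) that are taken from the reference image rather than from the aged input. The following minimal sketch of this kind of mixing over W+ codes is only an illustration, not the repository's actual implementation:

import torch

def mix_latents(w_input, w_reference, latent_mask=(8, 9)):
    # Replace the style vectors at the given indices with those of the reference.
    # w_input, w_reference: tensors of shape [batch, 18, 512] in W+ space.
    w_mixed = w_input.clone()
    for i in latent_mask:
        w_mixed[:, i] = w_reference[:, i]
    return w_mixed

# Random codes standing in for real encodings of the aged input and the reference.
w_in = torch.randn(1, 18, 512)
w_ref = torch.randn(1, 18, 512)
w_out = mix_latents(w_in, w_ref, latent_mask=(8, 9))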

Style Mixing

Instead of performing style mixing using a reference image, you can perform style mixing using randomly generated w latent vectors by running the script style_mixing.py. This script works in a similar manner to the reference-guided inference script, except that you do not need to specify the --ref_images_paths_file flag.
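
For example, a style-mixing run might look like the following (the flags mirror those of the reference-guided script above; the paths are illustrative):

python scripts/style_mixing.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--latent_mask=8,9 \
--target_age=50,60,70,80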

Repository structure

Path Description
SAM Repository root folder
├  configs Folder containing configs defining model/data paths and data transforms
├  criteria Folder containing various loss criteria for training
├  datasets Folder with various dataset objects and augmentations
├  docs Folder containing images displayed in the README
├  environment Folder containing Anaconda environment used in our experiments
├  models Folder containing all the models and training objects
│  ├  encoders Folder containing various architecture implementations
│  ├  stylegan2 StyleGAN2 model from rosinality
│  ├  psp.py Implementation of pSp encoder
│  └  dex_vgg.py Implementation of DEX VGG classifier used in computation of aging loss
├  notebooks Folder with Jupyter notebooks containing the SAM inference playground
├  options Folder with training and test command-line options
├  scripts Folder with running scripts for training and inference
├  training Folder with main training logic and Ranger implementation from lessw2020
└  utils Folder with various utility functions

Credits

StyleGAN2 model and implementation:
https://github.com/rosinality/stylegan2-pytorch
Copyright (c) 2019 Kim Seonghyeon
License (MIT) https://github.com/rosinality/stylegan2-pytorch/blob/master/LICENSE

IR-SE50 model and implementations:
https://github.com/TreB1eN/InsightFace_Pytorch
Copyright (c) 2018 TreB1eN
License (MIT) https://github.com/TreB1eN/InsightFace_Pytorch/blob/master/LICENSE

Ranger optimizer implementation:
https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer
License (Apache License 2.0) https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer/blob/master/LICENSE

LPIPS model and implementation:
https://github.com/S-aiueo32/lpips-pytorch
Copyright (c) 2020, Sou Uchida
License (BSD 2-Clause) https://github.com/S-aiueo32/lpips-pytorch/blob/master/LICENSE

DEX VGG model and implementation:
https://github.com/InterDigitalInc/HRFAE
Copyright (c) 2020, InterDigital R&D France
https://github.com/InterDigitalInc/HRFAE/blob/master/LICENSE.txt

pSp model and implementation:
https://github.com/eladrich/pixel2style2pixel
Copyright (c) 2020 Elad Richardson, Yuval Alaluf
https://github.com/eladrich/pixel2style2pixel/blob/master/LICENSE

Acknowledgments

This code borrows heavily from pixel2style2pixel.

Citation

If you use this code for your research, please cite our paper Only a Matter of Style: Age Transformation Using a Style-Based Regression Model:

@misc{alaluf2021matter,
      title={Only a Matter of Style: Age Transformation Using a Style-Based Regression Model}, 
      author={Yuval Alaluf and Or Patashnik and Daniel Cohen-Or},
      year={2021},
      eprint={2102.02754},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • ImportError: No module named 'fused'

    Hi, I am trying to set up this repo on my local machine but I am getting this error. I searched the internet but couldn't find a solution. Any help will be appreciated. Thanks

    ImportError: No module named 'fused'

    opened by HasnainKhanNiazi 24
  • Purpose of parameters for inference.py script

    Could you please explain the purpose of the following two parameters? I checked the README but could not find anything relevant.

    --test_batch_size=4
    --test_workers=4

    Also, how can we improve the quality of the old-face effects?

    opened by MetiCodePrivateLimited 9
  • Question about the faces generated at the early steps of training

    Thank you very much for open-sourcing such a cool project, I'm very interested in it! :cool:

    Then I used the FFHQ 512×512 dataset to run the training again (using your pretrained SAM model to initialize the network weights), with the Adam optimizer and the learning rate set to 0.0001.

    However, the faces generated during the first 15,000 steps all look like the same female face, regardless of whether the input face is male or female, round or square. In later steps, the generated faces look more like the input faces.

    What could be the reason? At first I even thought there was a problem reading the input images in the code. Is it caused by the StyleGAN2 decoder or the pSp encoder?

    Thanks.

    opened by S-HuaBomb 7
  • Model does not work well for Asian faces.

    As the title mentions, I used the pretrained model on Asian faces; after the transformation, the skin color of the faces becomes white. I guess this may be caused by data imbalance. Do you have any suggestions if I want to retrain the model on a dataset mostly composed of Asian faces?

    Should I also retrain the pSp encoder and the FFHQ StyleGAN model? Thanks

    opened by stereomatchingkiss 7
  • Cuda out of memory

    I am getting the following error when I run train.py.

    RuntimeError: CUDA out of memory. Tried to allocate 54.00 MiB (GPU 0; 7.93 GiB total capacity; 6.86 GiB already allocated; 18.44 MiB free; 7.31 GiB reserved in total by PyTorch)

    Also, if I run the following commands, I get output:

    import torch
    print(torch.rand(1, device="cuda"))

    output: tensor([0.4547], device='cuda:0')

    Could you please help me to fix it?

    opened by MetiCodePrivateLimited 7
  • Use another generator pretrained weight

    Hi sir!

    I have a .pkl checkpoint file which I converted to .pt using Rosinality's converter. So I have some questions:

    1. My question is: could I use this model for inference with SAM? I know that during training you saved stylegan-ffhq-config-f.pt into the checkpoint sam.pt.

    2. Could I load the sam.pt model and replace the state_dict of stylegan-ffhq-config-f with my own StyleGAN model? This idea comes from the fact that you freeze the generator during training, so the generator weights do not affect the final SAM weights used for inference.

    3. Or should I retrain the SAM model with my own generator model?

    4. Or should I retrain the StyleGAN encoder in the pSp repo?

    Thank you so much sir.

    opened by duongquangvinh 5
  • Questions about target_ages.

    First of all, I want to thank you for the great project. The results were very impressive! However, when I read the code I felt that some things did not match what was described in the paper. I hope you can explain them to me. Thank you!

    1. In the following line of code: https://github.com/yuval-alaluf/SAM/blob/fb6699845bd50e9b6bf8520112c6a746456128f4/training/coach_aging.py#L266 You calculate the cosine weight based on target_ages. But I think cosine weight should be calculated based on abs(source ages - target_ages). Is that correct?

    2. In the following line of code: https://github.com/yuval-alaluf/SAM/blob/fb6699845bd50e9b6bf8520112c6a746456128f4/training/coach_aging.py#L114 y_hat_inverse is generated by concatenating y_hat_clone and the age that the age predictor predicts for y_hat_clone. However, according to the paper, y_hat_inverse should be made by concatenating y_hat_clone and the source ages (of the original image). Is that correct?

    opened by datbu178 5
  • [Error] [Win] INFO: Could not find files for the given pattern(s).

    I ran scripts/inference.py with --exp_dir=output/ --checkpoint_path=saved_model/best_model.pt --data_path=input/ --test_batch_size=1 --test_workers=1 --target_age=0,10,20,30,40,50,60,70,80 arguments and got an error. Can you please help me resolve it?

    dependency versions that I'm using on windows: cmake==3.22.4 dlib==19.24.0 ninja==1.10.2.3 numpy==1.21.6 scipy==1.7.3 torch==1.9.0+cu111 torchaudio==0.9.0 torchvision==0.10.0+cu111

    C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py:305: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
      warnings.warn(f'Error checking compiler version for {compiler}: {error}')
    INFO: Could not find files for the given pattern(s).
    Traceback (most recent call last):
      File "C:/Work/SAM/scripts/inference.py", line 20, in
        from models.psp import pSp
      File "C:\Work\SAM\models\psp.py", line 12, in
        from models.encoders import psp_encoders
      File "C:\Work\SAM\models\encoders\psp_encoders.py", line 8, in
        from models.stylegan2.model import EqualLinear
      File "C:\Work\SAM\models\stylegan2\model.py", line 7, in
        from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
      File "C:\Work\SAM\models\stylegan2\op\__init__.py", line 1, in
        from .fused_act import FusedLeakyReLU, fused_leaky_relu
      File "C:\Work\SAM\models\stylegan2\op\fused_act.py", line 13, in
        os.path.join(module_path, 'fused_bias_act_kernel.cu'),
      File "C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1092, in load
        keep_intermediates=keep_intermediates)
      File "C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1303, in _jit_compile
        is_standalone=is_standalone)
      File "C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1401, in _write_ninja_file_and_build_library
        is_standalone=is_standalone)
      File "C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1834, in _write_ninja_file_to_build_library
        with_cuda=with_cuda)
      File "C:\Work\SAM\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1950, in _write_ninja_file
        'cl']).decode().split('\r\n')
      File "C:\Users\affan\AppData\Local\Programs\Python\Python37\lib\subprocess.py", line 411, in check_output
        **kwargs).stdout
      File "C:\Users\affan\AppData\Local\Programs\Python\Python37\lib\subprocess.py", line 512, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command '['where', 'cl']' returned non-zero exit status 1.

    opened by affanmehmood 4
  • Downloading ffhq-dataset

    I want to train the model on the freely available FFHQ dataset, which is around 2.5 TB in size. I tried to download it from the browser but it gives me an error. Is there any way to download it from the terminal?

    opened by MetiCodePrivateLimited 4
  • Latent vector

    I am trying to execute style_mixing.py but I am getting no output. The following is the command:

    python scripts/style_mixing.py \
    --exp_dir=to \
    --checkpoint_path=trained/sam_ffhq_aging.pt \
    --data_path=from \
    --test_batch_size=4 \
    --test_workers=4 \
    --latent_mask=8,9 \
    --target_age=50

    What does --latent_mask=8,9 mean? What are 8 and 9 in the latent mask, given that there are no files with these names in the directory? How can I get the latent mask files from the cloud?

    opened by MetiCodePrivateLimited 4
  • Getting error in compiling code

    I am trying to execute the following command:

    python scripts/inference.py \
    --exp_dir=/to \
    --checkpoint_path=/trained/sam2.pt \
    --data_path=/from \
    --test_batch_size=4 \
    --test_workers=4 \
    --couple_outputs \
    --target_age=0,10,20,30,40,50,60,70,80

    and I got the following error:


    Traceback (most recent call last):
      File "scripts/inference.py", line 19, in
        from models.psp import pSp
      File "./models/psp.py", line 12, in
        from models.encoders import psp_encoders
      File "./models/encoders/psp_encoders.py", line 8, in
        from models.stylegan2.model import EqualLinear
      File "./models/stylegan2/model.py", line 7, in
        from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
      File "./models/stylegan2/op/__init__.py", line 1, in
        from .fused_act import FusedLeakyReLU, fused_leaky_relu
      File "./models/stylegan2/op/fused_act.py", line 13, in
        os.path.join(module_path, 'fused_bias_act_kernel.cu'),
      File "/root/anaconda3/envs/sam_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 974, in load
        keep_intermediates=keep_intermediates)
      File "/root/anaconda3/envs/sam_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1179, in _jit_compile
        with_cuda=with_cuda)
      File "/root/anaconda3/envs/sam_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1251, in _write_ninja_file_and_build_library
        check_compiler_abi_compatibility(compiler)
      File "/root/anaconda3/envs/sam_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 248, in check_compiler_abi_compatibility
        if not check_compiler_ok_for_platform(compiler):
      File "/root/anaconda3/envs/sam_env/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 208, in check_compiler_ok_for_platform
        which = subprocess.check_output(['which', compiler], stderr=subprocess.STDOUT)
      File "/root/anaconda3/envs/sam_env/lib/python3.6/subprocess.py", line 336, in check_output
        **kwargs).stdout
      File "/root/anaconda3/envs/sam_env/lib/python3.6/subprocess.py", line 418, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command '['which', 'c++']' returned non-zero exit status 1.


    Could you help me to fix this error?

    opened by MetiCodePrivateLimited 4