PyTorch implementation for our paper Learning Character-Agnostic Motion for Motion Retargeting in 2D, SIGGRAPH 2019

Overview

Learning Character-Agnostic Motion for Motion Retargeting in 2D

We provide PyTorch implementation for our paper Learning Character-Agnostic Motion for Motion Retargeting in 2D, SIGGRAPH 2019.

Prerequisites

  • Linux
  • CPU or NVIDIA GPU + CUDA CuDNN
  • Python 3
  • PyTorch 0.4

Getting Started

Installation

  • Clone this repo

    git clone https://github.com/ChrisWu1997/2D-Motion-Retargeting.git
    cd 2D-Motion-Retargeting
  • Install dependencies

    pip install -r requirements.txt

    Note that the imageio package requires ffmpeg, and there are several ways to install it. For anaconda users, running conda install ffmpeg -c conda-forge is the simplest.

Run demo examples

We provide pretrained models and several video examples, along with their OpenPose outputs. After running, the results (final joint positions + videos) will be saved in the output folder.

  • Run the full model to combine motion, skeleton, view angle from three input videos:

    python predict.py -n full --model_path ./model/pretrained_full.pth -v1 ./examples/tall_man -v2 ./examples/small_man -v3 ./examples/workout_march -h1 720 -w1 720 -h2 720 -w2 720 -h3 720 -w3 720 -o ./outputs/full-demo --max_length 120

    Results will be saved in ./outputs/full-demo.

  • Run the full model to interpolate between two input videos. For example, to keep the body attribute unchanged and interpolate along the motion and view axes:

    python interpolate.py --model_path ./model/pretrained_full.pth -v1 ./examples/model -v2 ./examples/tall_man -h1 720 -w1 720 -h2 720 -w2 720 -o ./outputs/interpolate-demo.mp4 --keep_attr body --form matrix --nr_sample 5 --max_length 120

    You will get a matrix of videos demonstrating the interpolation results.

  • Run the two-encoder model to transfer motion and skeleton between two input videos:

    python predict.py -n skeleton --model_path ./model/pretrained_skeleton.pth -v1 ./examples/tall_man -v2 ./examples/small_man -h1 720 -w1 720 -h2 720 -w2 720 -o ./outputs/skeleton-demo --max_length 120
  • Run the two-encoder model to transfer motion and view angle between two input videos:

    python predict.py -n view --model_path ./model/pretrained_view.pth -v1 ./examples/tall_man -v2 ./examples/model -h1 720 -w1 720 -h2 720 -w2 720 -o ./outputs/view-demo --max_length 120

Use your own videos

To run our models with your own videos, you first need to use OpenPose to extract the 2D joint positions from the video, then use the resulting JSON files as described in the demo examples.
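For reference, a minimal sketch of what reading OpenPose's per-frame JSON output looks like (the paths here are hypothetical, and the repo's own functional.motion.openpose2motion handles this step for you):

    import glob
    import json
    import numpy as np

    # hypothetical output directory of an OpenPose run with --write_json
    json_files = sorted(glob.glob('./my_video_json/*.json'))  # one file per frame

    frames = []
    for path in json_files:
        with open(path) as f:
            frame = json.load(f)
        if not frame['people']:
            continue  # no person detected in this frame
        # pose_keypoints_2d is a flat list [x0, y0, c0, x1, y1, c1, ...]
        kpts = np.array(frame['people'][0]['pose_keypoints_2d']).reshape(-1, 3)
        frames.append(kpts[:, :2])  # keep (x, y), drop confidence scores

    motion = np.stack(frames)  # (T, J, 2) array of 2D joint positions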

Train from scratch

Prepare Data

  • Download Mixamo Data

    For convenience, we provide a packed version of the Mixamo data we use. To download it, see Google Drive or Baidu Drive (8jq3). After downloading, extract it into ./mixamo_data.

    NOTE: Our Mixamo dataset covers only part of the full collection provided by the Mixamo website. If you want to collect Mixamo data yourself, you can follow our guide here. The downloaded files are in FBX format; to convert them into json/npy (3D joint positions), use our script dataset/fbx2joints3d.py (requires Blender 2.79).

  • Preprocess the downloaded data

    python ./dataset/preprocess.py
    

Train

  • Train the full model (with three encoders) on GPU:

    python train.py -n full -g 0
    

    Furthermore, you can select which structure to train and which losses to use through command-line arguments:

    -n: which structure to train. 'skeleton' / 'view' for the two-encoder system that transfers skeleton/view; 'full' for the full system with three encoders.

    --disable_triplet: disable the triplet loss. By default, the triplet loss is used.

    --use_footvel_loss: use the foot velocity loss.
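    For instance, a hypothetical run that trains the two-encoder skeleton model on GPU 0 without the triplet loss but with the foot velocity loss:

        python train.py -n skeleton -g 0 --disable_triplet --use_footvel_loss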

Citation

If you use this code for your research, please cite our paper:

@article{aberman2019learning,
  author = {Aberman, Kfir and Wu, Rundi and Lischinski, Dani and Chen, Baoquan and Cohen-Or, Daniel},
  title = {Learning Character-Agnostic Motion for Motion Retargeting in 2D},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {38},
  number = {4},
  pages = {75},
  year = {2019},
  publisher = {ACM}
}

Comments
  • How to convert the coordinates from mixamo to normal 2d pose coordinates?

    Hi, could you tell me the pipeline for getting 2D pose coordinates from Mixamo files? I ran your code (https://github.com/ChrisWu1997/2D-Motion-Retargeting/blob/7eaae7e87e927d279ad91e703b6e8f8b4d482f64/dataset/fbx2joints3d.py#L89), but ran into some problems with bpy 2.9.

    opened by annopackage 15
  • Training error

    Hello, when I train with python3 train.py -n full -g 0, the following error appears:

        Traceback (most recent call last):
          File "train.py", line 101, in <module>
            main()
          File "train.py", line 39, in main
            train_loader = get_dataloader('train', config, config.batch_size, config.num_workers)
          File "/home/sjwang/python_project/2D-Motion-Retargeting-master/dataset/__init__.py", line 17, in get_dataloader
            num_workers=num_workers, worker_init_fn=lambda _: np.random.seed())
          File "/home/sjwang/py/python3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 802, in __init__
            sampler = RandomSampler(dataset)
          File "/home/sjwang/py/python3/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 64, in __init__
            "value, but got num_samples={}".format(self.num_samples))
        ValueError: num_samples should be a positive integeral value, but got num_samples=0

    How can I solve this?

    opened by wwwjs 7
  • Reference works for 3D pose estimation problems

    Hi @ChrisWu1997, I have reworked your project to apply it to my dataset, and it seems to be learning quite well. I am curious how to do the same in a 3D setting, or at least for a related problem like motion synthesis. I am new to this field and quite interested in learning from more projects like yours. Could you refer me to a review paper or a GitHub repo as a starting point for the 3D setting? Thank you.

    opened by RahhulDd 6
  • Version of blender and bpy

    Hi, thanks for your great work. However, I want to know the version of the Blender API you used; could you provide a link to the corresponding PyPI release? Currently, the latest version of Blender is v2.9.1, and while running dataset/fbx2joints3d.py it throws errors:

        File "dataset/fbx2joints3d.py", line 202, in <module>
            main()
        File "dataset/fbx2joints3d.py", line 124, in main
            set_homefile(HOME_FILE_PATH)
        File "dataset/fbx2joints3d.py", line 58, in set_homefile
            bpy.data.objects['Lamp'].data.energy = 2
        KeyError: 'bpy_prop_collection[key]: key "Lamp" not found'
        Segmentation fault (core dumped)

    opened by annopackage 4
  • Error with custom JSON keypoints extracted from openpose

    Hi, thank you for the interesting work and code.

    I tried extracting skeleton keypoints with OpenPose (JSON files), then ran:

        python interpolate.py --model_path ./model/pretrained_full.pth -v1 ./examples/video1 -v2 ./examples/video2 -h1 720 -w1 720 -h2 720 -w2 720 -o ./outputs/interpolate-demo-custom.mp4 --keep_attr body --form matrix --nr_sample 5 --max_length 120

    This is the error I got:

        RuntimeError: invalid argument 4: Padding size should be less than the corresponding input dimension, but got: padding (3, 3) at dimension 2 of input [1 x 30 x 2] at c:\programdata\miniconda3\conda-bld\pytorch_1533090623466\work\aten\src\thcunn\generic/TemporalReflectionPadding.cu:32

    Any idea why? Thanks!
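    For what it's worth, this error reproduces whenever a tensor's temporal dimension is smaller than the reflection padding applied to it; the input above has only 2 frames against a padding of 3, which suggests very few frames survived preprocessing. A minimal sketch of the constraint:

        import torch
        import torch.nn as nn

        pad = nn.ReflectionPad1d(3)        # reflection padding along the time axis
        ok = torch.randn(1, 30, 8)         # 8 frames: 3 < 8, fine
        print(pad(ok).shape)               # torch.Size([1, 30, 14])

        too_short = torch.randn(1, 30, 2)  # 2 frames, as in the error above
        pad(too_short)                     # raises: padding size must be less than
                                           # the corresponding input dimension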

    opened by chuanenlin 4
  • How does two encoder model compare with three encoder model?

    Hi there,

    Thank you for the great work! But I wonder why you trained three models rather than only the three-encoder (full) model. You can definitely transfer motion and view using the three-encoder model. Does the two-encoder model perform better at motion feature extraction?

    Thanks!

    opened by rozentill 4
  • Extract latent representation information?

    From Figs. 9, 10, and 11 of the paper, I saw that clustering visualizes the latent space distributions (view, skeleton, motion). I am wondering if it is possible to output the latent space data (e.g., the subject's camera-view angle in degrees for a specific frame)?
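    The encoder outputs are abstract embedding vectors rather than calibrated quantities such as an angle in degrees, but the codes themselves are easy to pull out. A hedged sketch, assuming net (the trained autoencoder) and x (a preprocessed motion tensor) were obtained as in predict.py:

        import torch

        # assuming `net` and `x` (shape (1, C, T)) were built as in predict.py
        with torch.no_grad():
            m = net.mot_encoder(x)              # time-varying motion code
            b = net.body_encoder(x[:, :-2, :])  # per-clip skeleton code
            v = net.view_encoder(x[:, :-2, :])  # per-clip view code
        print(m.shape, b.shape, v.shape)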

    opened by chuanenlin 3
  • Where is the "Temporal clipping" part?

    Hello, I am very interested in your paper :) While looking at your code, I can't find the "temporal clipping" part (paper, page 7) used for data augmentation. I want to know where the implementation of "in every iteration we randomly select the temporal length from the set T ∈ {64, 56, 48, 40}" is.

    Thank you.
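    For reference, a minimal sketch of the augmentation described in the paper (not necessarily how this repo implements it), assuming motion arrays with time as the last axis:

        import numpy as np

        def random_temporal_crop(motion, lengths=(64, 56, 48, 40)):
            # randomly pick a clip length from the set and crop along time;
            # assumes motion has at least min(lengths) frames
            T = motion.shape[-1]
            length = np.random.choice([l for l in lengths if l <= T])
            start = np.random.randint(0, T - length + 1)
            return motion[..., start:start + length]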

    opened by DK-Jang 3
  • Questions about the network architecture

    Hi, I've been reading your paper and code very carefully recently, and I have to say: nice work! But I have several questions about the network architecture.

    1. In the code, I noticed that whenever you encode the body and the view angle, you drop the last two rows (axis=1) of the input tensor, and I was wondering why. At first glance, I thought you put the pelvis joint at the end of the joint tensor, but I couldn't find any evidence; could you kindly explain it? Below is the code that baffled me:

           m1 = self.mot_encoder(x1)
           b2 = self.body_encoder(x2[:, :-2, :]).repeat(1, 1, m1.shape[-1])
           v3 = self.view_encoder(x3[:, :-2, :]).repeat(1, 1, m1.shape[-1])

    2. In the common.py, you initialized the network input and output channels as following:

       self.mot_en_channels = [self.len_joints + 2, 64, 96, 128]
           self.body_en_channels = [self.len_joints, 32, 48, 64, 16]
           self.view_en_channels = [self.len_joints, 32, 48, 64, 8]
           self.de_channels = [self.mot_en_channels[-1] + self.body_en_channels[-1] + self.view_en_channels[-1],
                               128, 64, self.len_joints + 2]
      
           self.meanpose_path = './mixamo_data/meanpose_with_view.npy'
           self.stdpose_path = './mixamo_data/stdpose_with_view.npy'
      

    I was wondering why you need two more channels for the motion encoder. Shouldn't it be the same as the body and view channels, since the number of joints is the same?

    Many thanks!

    opened by catherineytw 2
  • Skeleton keypoints numbers

    Hi Chris, great work! May I ask why you used just 15 keypoints and removed the other 10 from OpenPose's output? For example, can we extract the left/right eye/ear information from the Mixamo FBX files?

    Thanks!

    opened by HighlyAuditory 2
  • About .json to .npy

    Thank you for your excellent work! The script fbx2joints3d.py converts .fbx files into .json format, but only .npy files are recognized by preprocess.py and the subsequent pipeline. So how can I prepare my own dataset? Or how can I convert the JSON files to npy files the way you did?
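    Judging from the helper names in this repo (a json2npy routine is referenced alongside dataset/fbx2joints3d.py), the conversion amounts to loading the per-frame joint positions from JSON and saving them as an array. A rough sketch, with the JSON layout assumed rather than confirmed:

        import json
        import numpy as np

        # assumed layout: a nested list of per-frame, per-joint 3D positions,
        # as written by dataset/fbx2joints3d.py; paths are hypothetical
        with open('character/animation.json') as f:
            joints3d = np.array(json.load(f))  # e.g. (T, J, 3)

        np.save('character/animation.npy', joints3d)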

    opened by iluvrachel 2
  • Project dependencies may have API risk issues

    Hi, in 2D-Motion-Retargeting, inappropriate dependency version constraints can cause risks.

    Below are the dependencies and version constraints that the project is using:

    numpy==1.16.2
    scipy==1.1.0
    tensorboardX==1.2
    torch==0.4.1
    tqdm==4.15.0
    imageio==2.3.0
    opencv-python==3.4.4.19
    sklearn==0.0
    scikit-learn==0.21.3
    matplotlib==3.1.1
    

    The version constraint == introduces the risk of dependency conflicts because the dependency scope is too strict. The version constraints "no upper bound" and * introduce the risk of missing-API errors because the latest versions of the dependencies may remove some APIs.

    After further analysis, in this project the version constraint of numpy can be changed to >=1.8.0,<=1.23.0rc3; scipy to >=0.12.0,<=1.7.3; tqdm to >=4.36.0,<=4.64.0; imageio to >=1.1-linux32,<=2.19.3; and scikit-learn to >=0.15.0b1,<=0.21.3.

    The above modification suggestions can reduce dependency conflicts as much as possible while introducing the latest versions without causing API errors in the project.

    The current project invokes all of the following methods.

    The calling methods from numpy:
    numpy.linalg.norm

    The calling methods from scipy:
    scipy.ndimage.gaussian_filter1d

    The calling methods from tqdm:
    tqdm.tqdm
    tqdm.tqdm.set_postfix
    tqdm.tqdm.set_description

    The calling methods from imageio:
    imageio.get_writer

    The calling methods from scikit-learn:
    sklearn.manifold.TSNE
    sklearn.decomposition.PCA.fit_transform
    sklearn.decomposition.PCA
    sklearn.manifold.TSNE.fit_transform
    
    The calling methods from all methods:

    ... (truncated)

    @developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.

    opened by PyDeps 0
  • Bump numpy from 1.16.2 to 1.22.0

    Bumps numpy from 1.16.2 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across applications such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)


    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Does it work on fixed images?

    Hi there,

    Congratulations on your impressive work!!

    May I use your library to convert a 2D picture into another 2D picture?

    For example: Picture A with a standing person + Picture B with a sitting person => Picture A with a sitting person.

    Thx,

    opened by fulsi 3
  • Bump opencv-python from 3.4.4.19 to 4.2.0.32

    Bumps opencv-python from 3.4.4.19 to 4.2.0.32.

    Release notes

    Sourced from opencv-python's releases.

    4.2.0.32

    OpenCV version 4.2.0.

    Changes:

    • macOS environment updated from xcode8.3 to xcode 9.4
    • macOS uses now Qt 5 instead of Qt 4
    • Nasm version updated to Docker containers
    • multibuild updated

    Fixes:

    • don't use deprecated brew tap-pin, instead refer to the full package name when installing #267
    • replace get_config_var() with get_config_vars() in setup.py #274
    • add workaround for DLL errors in Windows Server #264

    3.4.9.31

    OpenCV version 3.4.9.

    Changes:

    • macOS environment updated from xcode8.3 to xcode 9.4
    • macOS uses now Qt 5 instead of Qt 4
    • Nasm version updated to Docker containers
    • multibuild updated

    Fixes:

    • don't use deprecated brew tap-pin, instead refer to the full package name when installing #267
    • replace get_config_var() with get_config_vars() in setup.py #274
    • add workaround for DLL errors in Windows Server #264

    4.1.2.30

    OpenCV version 4.1.2.

    Changes:

    ... (truncated)

    (Standard Dependabot notes and the same command list as in the numpy bump PR above.)

    dependencies 
    opened by dependabot[bot] 0
  • Applying Interpolation for Multi-view Pose Estimation

    Hi there.

    Great paper!

    I wonder if we could use interpolation for multi-view pose estimation like so:

    1. Record the same scene with 2 cameras at different viewpoints.

    This would create 2 videos of the same person/body/skeleton and the same motion, but from different viewpoints.

    2. Run interpolate.py on the 2 (time-synced) videos, with keep_attr set to none to allow averaging of the body attribute, averaging of the motion (to reduce errors), and interpolation along the view axis.

    The interpolation should then result in a series of view transformations from camera 1 to camera 2.

    3. To extract 3D data, we somehow step through the transformation from camera 1 to camera 2.

    However, each step in the interpolation may not be geometrically proportional to a step in angle.

    Q1. Can someone confirm whether a step in the interpolation is proportional to the angular change?

    Q2. Is it plausible to adapt the interpolation code to work with more than 2 videos? Theoretically, unlimited?

    This line seems to be key (need to generate 2D alphas for 3 videos, 3D alphas for 4 videos, etc?) https://github.com/ChrisWu1997/2D-Motion-Retargeting/blob/7eaae7e87e927d279ad91e703b6e8f8b4d482f64/interpolate.py#L17
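    On Q2: with two videos the code builds scalar interpolation weights with torch.linspace and blends the two codes linearly. Generalizing to N videos would mean convex weights over N codes; a speculative sketch (not from the repo):

        import torch

        # two videos: v = a * v1 + (1 - a) * v2 for a in [0, 1]
        alphas = torch.linspace(0, 1, 5)

        # N videos (hypothetical): convex weights summing to 1 per sample,
        # e.g. drawn from a Dirichlet distribution over N components
        N = 3
        weights = torch.distributions.Dirichlet(torch.ones(N)).sample((5,))  # (5, N)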

    And again, awesome paper!

    Probably related issue: https://github.com/ChrisWu1997/2D-Motion-Retargeting/issues/5

    Thanks!

    opened by NicksonYap 1
Owner
Rundi Wu, PhD student at Columbia University
a pytorch implementation of auto-punctuation learned character by character

Learning Auto-Punctuation by Reading Engadget Articles Link to Other of my work ?? Deep Learning Notes: A collection of my notes going from basic mult

Ge Yang 137 Nov 9, 2022
Add-on for importing and auto setup of character creator 3 character exports.

CC3 Blender Tools An add-on for importing and automatically setting up materials for Character Creator 3 character exports. Using Blender in the Chara

null 260 Jan 5, 2023
PyTorch implementations for our SIGGRAPH 2021 paper: Editable Free-viewpoint Video Using a Layered Neural Representation.

st-nerf We provide PyTorch implementations for our paper: Editable Free-viewpoint Video Using a Layered Neural Representation SIGGRAPH 2021 Jiakai Zha

Diplodocus 258 Jan 2, 2023
Code release for Local Light Field Fusion at SIGGRAPH 2019

Local Light Field Fusion Project | Video | Paper Tensorflow implementation for novel view synthesis from sparse input images. Local Light Field Fusion

null 1.1k Dec 27, 2022
Code release for Local Light Field Fusion at SIGGRAPH 2019

Local Light Field Fusion Project | Video | Paper Tensorflow implementation for novel view synthesis from sparse input images. Local Light Field Fusion

null 748 Nov 27, 2021
PyTorch implementation of the supervised learning experiments from the paper Model-Agnostic Meta-Learning (MAML)

pytorch-maml This is a PyTorch implementation of the supervised learning experiments from the paper Model-Agnostic Meta-Learning (MAML): https://arxiv

Kate Rakelly 516 Jan 5, 2023
PyTorch implementation of our Adam-NSCL algorithm from our CVPR2021 (oral) paper "Training Networks in Null Space for Continual Learning"

Adam-NSCL This is a PyTorch implementation of Adam-NSCL algorithm for continual learning from our CVPR2021 (oral) paper: Title: Training Networks in N

Shipeng Wang 34 Dec 21, 2022
Character Controllers using Motion VAEs

Character Controllers using Motion VAEs This repo is the codebase for the SIGGRAPH 2020 paper with the title above. Please find the paper and demo at

Electronic Arts 165 Jan 3, 2023
Implementation of the paper "Language-agnostic representation learning of source code from structure and context".

Code Transformer This is an official PyTorch implementation of the CodeTransformer model proposed in: D. Zügner, T. Kirschstein, M. Catasta, J. Leskov

Daniel Zügner 131 Dec 13, 2022
[AAAI2021] The source code for our paper 《Enhancing Unsupervised Video Representation Learning by Decoupling the Scene and the Motion》.

DSM The source code for paper Enhancing Unsupervised Video Representation Learning by Decoupling the Scene and the Motion Project Website; Datasets li

Jinpeng Wang 114 Oct 16, 2022
PyTorch implementation of CDistNet: Perceiving Multi-Domain Character Distance for Robust Text Recognition

PyTorch implementation of CDistNet: Perceiving Multi-Domain Character Distance for Robust Text Recognition The unofficial code of CDistNet. Now, we ha

null 25 Jul 20, 2022
A PyTorch implementation of SlowFast based on ICCV 2019 paper "SlowFast Networks for Video Recognition"

SlowFast A PyTorch implementation of SlowFast based on ICCV 2019 paper SlowFast Networks for Video Recognition. Requirements Anaconda PyTorch conda in

Hao Ren 8 Dec 23, 2022
A code repository associated with the paper A Benchmark for Rough Sketch Cleanup by Chuan Yan, David Vanderhaeghe, and Yotam Gingold from SIGGRAPH Asia 2020.

A Benchmark for Rough Sketch Cleanup This is the code repository associated with the paper A Benchmark for Rough Sketch Cleanup by Chuan Yan, David Va

null 33 Dec 18, 2022
Supplementary code for SIGGRAPH 2021 paper: Discovering Diverse Athletic Jumping Strategies

SIGGRAPH 2021: Discovering Diverse Athletic Jumping Strategies project page paper demo video Prerequisites Important Notes We suspect there are bugs i

null 54 Dec 6, 2022
Code for the SIGGRAPH 2021 paper "Consistent Depth of Moving Objects in Video".

Consistent Depth of Moving Objects in Video This repository contains training code for the SIGGRAPH 2021 paper "Consistent Depth of Moving Objects in

Google 203 Jan 5, 2023
This repository contains the code for the paper "Hierarchical Motion Understanding via Motion Programs"

Hierarchical Motion Understanding via Motion Programs (CVPR 2021) This repository contains the official implementation of: Hierarchical Motion Underst

Sumith Kulal 40 Dec 5, 2022
Implementation for "Seamless Manga Inpainting with Semantics Awareness" (SIGGRAPH 2021 issue)

Seamless Manga Inpainting with Semantics Awareness [SIGGRAPH 2021](To appear) | Project Website | BibTex Introduction: Manga inpainting fills up the d

null 101 Jan 1, 2023
Implementation for "Manga Filling Style Conversion with Screentone Variational Autoencoder" (SIGGRAPH ASIA 2020 issue)

Manga Filling with ScreenVAE SIGGRAPH ASIA 2020 | Project Website | BibTex This repository is for ScreenVAE introduced in the following paper "Manga F

null 30 Dec 24, 2022