
Img2Motion: Learning to Drive 3D Avatars using Videos

EasyChair Preprint no. 1633

4 pages · Date: October 11, 2019

Abstract

This paper presents a novel neural-network motion retargeting system that drives rigged 3D digital human avatars from videos. We study the problem of building a motion mapping between 2D video and 3D skeletons, in which source characters drive target subjects with varying skeleton structures. In particular, the target 3D avatars may have different kinematic characteristics, e.g. bone lengths, skeleton scales, and skeleton topologies. Traditional motion retargeting maps between like pairs of characters, 2D characters to 2D characters or 3D characters to 3D characters, which leaves a gap when 2D character animation must drive rigged 3D characters. Such techniques may not be capable of retargeting 2D motion to 3D digital human avatars from sparse skeleton motion data. Motivated by these limitations, we present a pipeline for building a neural-network motion retargeting system that retargets motion from 2D videos to rigged 3D digital human avatars. The complete pipeline can be used in games and virtual reality systems, and can also generate a more comprehensive dataset with a larger variety of human poses by animating existing rigged human models.
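To make the retargeting stage described above concrete, the sketch below is a minimal, hypothetical NumPy illustration, not the authors' method. It assumes per-frame 3D joint positions have already been lifted from the 2D video (e.g. by a 2D pose estimator followed by a lifting network), and it naively copies the source pose's bone directions onto a target skeleton with different bone lengths. The joint hierarchy PARENTS and the function names are invented for this example.

import numpy as np

# Hypothetical joint hierarchy shared by source and target: entry j
# holds the parent index of joint j (the root has parent -1), and
# every parent is listed before its children.
PARENTS = [-1, 0, 1, 2, 0, 4, 0, 6]

def bone_directions(joints, parents):
    # Unit direction of each bone (child minus parent); row 0 stays zero.
    dirs = np.zeros_like(joints)
    for j, p in enumerate(parents):
        if p >= 0:
            v = joints[j] - joints[p]
            dirs[j] = v / (np.linalg.norm(v) + 1e-8)
    return dirs

def retarget(source_joints, parents, target_bone_lengths):
    # Naive per-bone retargeting: keep the source pose's bone
    # directions, rescale each bone to the target skeleton's length.
    dirs = bone_directions(source_joints, parents)
    out = np.zeros_like(source_joints)
    out[0] = source_joints[0]  # keep the root position
    for j, p in enumerate(parents):
        if p >= 0:
            out[j] = out[p] + dirs[j] * target_bone_lengths[j]
    return out

# Toy usage: an arbitrary source pose driving a target whose bones
# are all 20% longer than unit length.
source = np.random.randn(8, 3)
target = retarget(source, PARENTS, np.full(8, 1.2))

A learned system such as the one proposed here would replace this per-bone copy with a network that also handles topology differences; the sketch only shows why varying bone lengths alone already rule out directly copying joint positions.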

Keyphrases: 3D pose estimation, Digital Human, motion retargeting

BibTeX entry
BibTeX does not have an entry type for preprints; the following is a workaround that produces the correct reference:
@Booklet{EasyChair:1633,
  author = {Junying Wang and Weikai Chen and Hao Li},
  title = {Img2Motion: Learning to Drive 3D Avatars using Videos},
  howpublished = {EasyChair Preprint no. 1633},
  year = {EasyChair, 2019}}