
Vid2Game: Controllable Characters Extracted from Real-World Videos

2019-04-17
Oran Gafni, Lior Wolf, Yaniv Taigman

Abstract

We are given a video of a person performing a certain activity, from which we extract a controllable model. The model generates novel image sequences of that person, according to arbitrary user-defined control signals, typically marking the displacement of the moving body. The generated video can have an arbitrary background, and effectively captures both the dynamics and appearance of the person. The method is based on two networks. The first network maps a current pose and a single-instance control signal to the next pose. The second network maps the current pose, the new pose, and a given background to an output frame. Both networks include multiple novelties that enable high-quality performance. This is demonstrated on multiple characters extracted from various videos of dancers and athletes.
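To make the two-stage pipeline concrete, below is a minimal PyTorch-style sketch of how the two networks could be chained at inference time. The module names (`Pose2PoseNet`, `Pose2FrameNet`), the pose representation, and the `render_pose` helper are illustrative assumptions only; the paper's actual architectures, and the novelties the abstract mentions, are considerably more elaborate.

```python
import torch
import torch.nn as nn

# Hypothetical placeholder for network 1: (current pose, control signal) -> next pose.
class Pose2PoseNet(nn.Module):
    def __init__(self, pose_dim=2 * 25, control_dim=2, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + control_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, pose, control):
        return self.net(torch.cat([pose, control], dim=-1))

# Hypothetical placeholder for network 2: (current pose map, next pose map, background) -> frame.
class Pose2FrameNet(nn.Module):
    def __init__(self, in_ch=1 + 1 + 3, out_ch=3):
        super().__init__()
        # Poses are assumed rendered as single-channel maps; the background is RGB.
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, pose_map, next_pose_map, background):
        return self.net(torch.cat([pose_map, next_pose_map, background], dim=1))

# Autoregressive rollout driven by user-defined control signals.
# `render_pose` is an assumed helper that rasterizes a pose vector into an image-sized map.
def generate_sequence(p2p, p2f, init_pose, controls, render_pose, background):
    pose, frames = init_pose, []
    for control in controls:            # one control signal per generated frame
        next_pose = p2p(pose, control)  # network 1 predicts the next pose
        frame = p2f(render_pose(pose), render_pose(next_pose), background)
        frames.append(frame)            # network 2 composites the frame over the background
        pose = next_pose
    return frames
```

The key design point illustrated here is the separation of concerns: the first network handles motion (pose dynamics under control), while the second handles appearance (rendering the person over an arbitrary background).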

URL

http://arxiv.org/abs/1904.08379

PDF

http://arxiv.org/pdf/1904.08379

