Research Article | Open Access

Multi-View RGB-D Video Analysis and Fusion for 360 Degrees Unified Motion Reconstruction

Naveed Ahmed
  • University of Sharjah, United Arab Emirates

Abstract

We present a new method for capturing human motion over 360 degrees by fusing multi-view RGB-D video data from Kinect sensors. Our method reconstructs unified human motion from the fused RGB-D and skeletal data over 360 degrees and creates a unified skeletal animation. We make use of all three streams: RGB, depth, and skeleton, along with the joint tracking confidence state from the Microsoft Kinect SDK, to find the correctly oriented skeletons and merge them into a uniform measurement of human motion, resulting in a unified skeletal animation. We quantitatively validate the quality of the unified motion using two evaluation techniques. Our method is easy to implement and provides a simple solution for measuring and reconstructing a plausible 360-degree unified human motion that would not be possible to capture with a single Kinect due to tracking failures, self-occlusions, limited field of view, and subject orientation.
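
The abstract outlines the core fusion step: per-sensor skeletons are brought into a common frame and merged joint by joint, keeping only observations that the Kinect SDK flags as confidently tracked. The sketch below illustrates that idea in Python; the `Joint`/`Skeleton` structures, the state constants, the calibration transforms, and the averaging rule in `fuse_skeletons` are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Joint tracking states as reported by the Kinect SDK: tracked, inferred, not tracked.
TRACKED, INFERRED, NOT_TRACKED = 2, 1, 0

@dataclass
class Joint:
    position: Tuple[float, float, float]  # joint position in the sensor's camera space
    state: int                            # tracking confidence state for this joint

# A skeleton maps a joint name (e.g., "head", "left_hand") to its Joint data.
Skeleton = Dict[str, Joint]

def to_world(position, transform):
    """Apply a hypothetical 4x4 extrinsic calibration (row-major nested lists) to a point."""
    x, y, z = position
    wx = transform[0][0] * x + transform[0][1] * y + transform[0][2] * z + transform[0][3]
    wy = transform[1][0] * x + transform[1][1] * y + transform[1][2] * z + transform[1][3]
    wz = transform[2][0] * x + transform[2][1] * y + transform[2][2] * z + transform[2][3]
    return (wx, wy, wz)

def fuse_skeletons(skeletons: List[Skeleton], transforms: List) -> Skeleton:
    """Merge per-sensor skeletons for one frame into a single unified skeleton.

    For each joint, only observations flagged as TRACKED are considered; their
    world-space positions are averaged. If no sensor tracked the joint, the
    INFERRED observations are used as a fallback.
    """
    fused: Skeleton = {}
    joint_names = {name for s in skeletons for name in s}
    for name in joint_names:
        for wanted_state in (TRACKED, INFERRED):
            candidates = [
                to_world(s[name].position, t)
                for s, t in zip(skeletons, transforms)
                if name in s and s[name].state == wanted_state
            ]
            if candidates:
                n = len(candidates)
                fused[name] = Joint(
                    position=tuple(sum(c[i] for c in candidates) / n for i in range(3)),
                    state=wanted_state,
                )
                break
    return fused
```

In this sketch, a joint that is occluded for one sensor but tracked by another still receives a confident world-space estimate, which is the behavior the fusion is meant to provide; the per-sensor extrinsic transforms are assumed to come from a prior calibration step.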

Journal of Computer Science
Volume 13 No. 12, 2017, 795-804

DOI: https://doi.org/10.3844/jcssp.2017.795.804

Submitted On: 14 October 2017 | Published On: 23 December 2017

How to Cite: Ahmed, N. (2017). Multi-View RGB-D Video Analysis and Fusion for 360 Degrees Unified Motion Reconstruction. Journal of Computer Science, 13(12), 795-804. https://doi.org/10.3844/jcssp.2017.795.804

Keywords

  • 3D Animation
  • Kinect
  • RGB-D Video
  • Motion Capture
  • Multi-View Video