Encoding immersive sessions for online, interactive VR analytics
Fanini Bruno;
2019
Abstract
Capturing and recording immersive VR sessions performed through HMDs in explorative virtual environments may offer valuable insights into users' behavior, scene saliency and spatial affordances. Collected data can support effort prioritization in 3D modeling workflows or allow fine-tuning of locomotion models for time-constrained experiences. The web, with its recent specifications (WebVR/WebXR), represents a valid solution to enable accessible, interactive and usable tools for remote VR analysis of recorded sessions. Performing immersive analytics through common browsers, however, presents several challenges, including limited rendering capabilities. Furthermore, interactive inspection of large session records is often problematic due to network bandwidth constraints, or may involve computationally intensive encoding/decoding routines. This work proposes, formalizes and investigates flexible dynamic models to volumetrically capture user states and scene saliency during running VR sessions using compact approaches. We investigate image-based encoding techniques and layouts targeting interactive and immersive WebVR remote inspection. We performed several experiments to validate and assess the proposed encoding models, applied both to existing records and within networked scenarios through direct server-side encoding, using limited storage and computational resources.
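The abstract does not detail the image-based encoding itself. As a rough illustration of the general idea (not the paper's actual layout), the sketch below quantizes a sequence of sampled 3D user positions into 8-bit RGB pixels, one pixel per sample, assuming a known axis-aligned bounding volume for the scene; all function names and parameters are invented for this example.

```python
# Illustrative sketch only: quantize sampled user positions into 8-bit RGB
# pixels (one pixel per captured sample), assuming a known axis-aligned
# bounding box for the virtual environment. This is NOT the paper's actual
# encoding layout; names and parameters are hypothetical.

def encode_positions(samples, bbox_min, bbox_max):
    """Map each (x, y, z) sample into the box and quantize each axis to 0..255."""
    pixels = []
    for p in samples:
        px = tuple(
            int(round(255 * (p[i] - bbox_min[i]) / (bbox_max[i] - bbox_min[i])))
            for i in range(3)
        )
        pixels.append(px)
    return pixels  # one RGB pixel per captured sample

def decode_positions(pixels, bbox_min, bbox_max):
    """Inverse mapping: recover approximate positions from quantized pixels."""
    return [
        tuple(bbox_min[i] + (c[i] / 255.0) * (bbox_max[i] - bbox_min[i])
              for i in range(3))
        for c in pixels
    ]

# Example: two head-position samples inside a 10 m x 3 m x 10 m volume.
samples = [(0.0, 1.6, 0.0), (2.5, 1.7, -1.0)]
row = encode_positions(samples, (-5, 0, -5), (5, 3, 5))
approx = decode_positions(row, (-5, 0, -5), (5, 3, 5))
```

Such a pixel row can then be written into a standard image and fetched by a browser-based inspector, trading reconstruction precision (bounded by the quantization step, here roughly 4 cm on the 10 m axes) for a very compact, image-friendly representation.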