Encoding immersive sessions for online, interactive VR analytics

Bruno Fanini
2019

Abstract

Capturing and recording immersive VR sessions performed through HMDs in explorative virtual environments may offer valuable insights into users' behavior, scene saliency and spatial affordances. The collected data can support effort prioritization in 3D modeling workflows or allow fine-tuning of locomotion models for time-constrained experiences. The web, with its recent specifications (WebVR/WebXR), is a viable platform for accessible, interactive and usable tools for remote VR analysis of recorded sessions. Performing immersive analytics through common browsers, however, presents several challenges, including limited rendering capabilities. Furthermore, interactive inspection of large session records is often hampered by network bandwidth limits or by computationally intensive encoding/decoding routines. This work proposes, formalizes and investigates flexible dynamic models to volumetrically capture user states and scene saliency during live VR sessions in a compact form. We investigate image-based encoding techniques and layouts targeting interactive and immersive WebVR remote inspection. We performed several experiments to validate and assess the proposed encoding models, applying them to existing records and to networked scenarios with direct server-side encoding under limited storage and computational resources.
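As a rough, hypothetical illustration of what image-based session encoding can look like (not the paper's actual layout or codec), the TypeScript sketch below quantizes HMD position and view-direction samples to 8-bit channels within an assumed scene bounding volume and packs them into the rows of a small RGBA image, which a WebVR/WebXR client could fetch and decode as a texture. All names (Sample, VOL_MIN, encodeSession) and the two-row layout are assumptions introduced here for illustration only.

```typescript
// Hypothetical sketch: pack one HMD sample per pixel column of a 2-row RGBA image.
// Row 0 holds the quantized position, row 1 the quantized view direction.

interface Sample {
  pos: [number, number, number];   // world-space HMD position
  dir: [number, number, number];   // normalized view direction
}

// Assumed bounding volume of the explorable scene (illustrative values).
const VOL_MIN = [-10, 0, -10];
const VOL_MAX = [10, 4, 10];

function quantize01(v: number): number {
  // Map a value in [0,1] to an 8-bit channel.
  return Math.max(0, Math.min(255, Math.round(v * 255)));
}

function encodeSession(samples: Sample[]): Uint8ClampedArray {
  // Width = number of samples, height = 2, RGBA per pixel.
  const pixels = new Uint8ClampedArray(samples.length * 4 * 2);
  samples.forEach((s, i) => {
    for (let k = 0; k < 3; k++) {
      // Row 0: position normalized to the bounding volume.
      const t = (s.pos[k] - VOL_MIN[k]) / (VOL_MAX[k] - VOL_MIN[k]);
      pixels[i * 4 + k] = quantize01(t);
      // Row 1: direction components remapped from [-1,1] to [0,1].
      pixels[(samples.length + i) * 4 + k] = quantize01(s.dir[k] * 0.5 + 0.5);
    }
    pixels[i * 4 + 3] = 255;                       // alpha, unused here
    pixels[(samples.length + i) * 4 + 3] = 255;
  });
  // Can be wrapped in an ImageData (width = samples.length, height = 2)
  // and uploaded as a texture for remote inspection.
  return pixels;
}
```

Under these assumptions, a recorded session of N samples compresses to a 2xN image whose decoding on the client is a simple per-pixel dequantization; the actual layouts and quantization schemes studied in the paper may differ.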
Istituto di Scienze del Patrimonio Culturale - ISPC
Keywords: Data quantization; Immersive analytics; Session encoding; Virtual reality; WebVR; WebXR
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/362831
Citations
  • PMC: ND
  • Scopus: 15
  • Web of Science (ISI): ND