Semantic-Based Annotation and Inference to support Activity Detection / DI SUMMA, Maria. - (2014), pp. 1-50.
Semantic-Based Annotation and Inference to support Activity Detection
Maria di Summa
2014
Abstract
Innovative analysis methods applied to data extracted by off-the-shelf peripherals can provide useful results in activity recognition without requiring large computational resources. This work presents a framework, studied and developed for automated posture and gesture recognition, that exploits depth data provided by a commercial tracking device. The detection problem is handled as a semantic-based resource discovery. A general data model and a corresponding ontology provide the formal underpinning for automatic posture and gesture annotation via standard Semantic Web languages. Logic-based matchmaking exploiting non-standard inference services then makes it possible to: (i) detect postures by comparing on-the-fly the retrieved annotations with standard posture descriptions stored as individuals in a dedicated Knowledge Base; (ii) compare subsequent postures in order to describe and recognize gestures; (iii) compare subsequent gestures in order to describe and recognize actions. The framework has been implemented in a prototypical tool, and experimental tests have been carried out on a reference dataset. Preliminary results indicate the feasibility and usefulness of the proposed approach.
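The matchmaking idea described in the abstract, comparing a retrieved annotation against reference posture descriptions stored in a Knowledge Base, can be illustrated with a minimal sketch. This is not the thesis implementation: all names (`REFERENCE_POSTURES`, the feature labels, `best_match`) are hypothetical, and set containment stands in for description-logic subsumption, with the count of missing features as a crude analogue of the distance computed by non-standard inference services.

```python
# Hypothetical sketch: postures as sets of atomic features; an observed
# annotation matches a reference posture best when it lacks the fewest
# of that posture's required features.

REFERENCE_POSTURES = {
    "standing":     {"torso_vertical", "legs_extended", "arms_down"},
    "sitting":      {"torso_vertical", "knees_bent", "hips_flexed"},
    "raising_hand": {"torso_vertical", "legs_extended", "arm_raised"},
}

def missing_features(observation: set, posture: set) -> set:
    """Features required by the posture but absent from the observation."""
    return posture - observation

def best_match(observation: set) -> tuple:
    """Return (posture_name, missing_features) for the closest posture."""
    ranked = sorted(
        REFERENCE_POSTURES.items(),
        key=lambda item: len(missing_features(observation, item[1])),
    )
    name, features = ranked[0]
    return name, missing_features(observation, features)

obs = {"torso_vertical", "legs_extended", "arm_raised", "head_turned"}
name, missing = best_match(obs)  # an exact match leaves no missing features
```

A gesture would then be recognized, in the same spirit, by matching an ordered sequence of such posture results rather than a single one.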