To support the development of novel XR-based applications targeting the educational domain, XR2Learn will design, develop, and make available to its members a set of enablers that accelerate the development of XR learning applications. The consortium members will deliver these enablers as open-source components/libraries to the third parties that will be funded during the project lifetime to build XR-based learning applications.
XR2Learn Enablers
INTERACT: Authoring Tool for XR application development
Unimodal tools for building emotion classifiers (fine-tuning task in SSL)
Pre-trained representations and training tools as part of the enabler (audio, wearable devices, body tracking)
Affect Detection Enablers (pretext task in SSL)
Multimodal fusion tools as part of the enabler (feature- or decision-based; see the sketch after this list)
Learning process personalisation enabler (based on the Theory of flow)
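To make the distinction between the two fusion strategies concrete, the following minimal sketch (in PyTorch, with hypothetical class and variable names that are not part of the XR2Learn codebase) contrasts feature-level fusion, which concatenates unimodal embeddings before a joint classifier, with decision-level fusion, which combines per-modality predictions.

```python
import torch
import torch.nn as nn

class FeatureFusionClassifier(nn.Module):
    """Feature-level fusion: concatenate unimodal embeddings, then classify jointly."""
    def __init__(self, audio_dim, motion_dim, num_classes):
        super().__init__()
        self.head = nn.Linear(audio_dim + motion_dim, num_classes)

    def forward(self, audio_emb, motion_emb):
        fused = torch.cat([audio_emb, motion_emb], dim=-1)  # joint representation
        return self.head(fused)

class DecisionFusionClassifier(nn.Module):
    """Decision-level fusion: classify each modality separately, then average the logits."""
    def __init__(self, audio_dim, motion_dim, num_classes):
        super().__init__()
        self.audio_head = nn.Linear(audio_dim, num_classes)
        self.motion_head = nn.Linear(motion_dim, num_classes)

    def forward(self, audio_emb, motion_emb):
        return 0.5 * (self.audio_head(audio_emb) + self.motion_head(motion_emb))
```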
The enablers are planned to be delivered in a way that encourages users to go beyond the supported functionalities, giving them room to create their own novel enablers by adapting the code to additional modalities and downstream tasks (e.g., classical Emotion Recognition, Activity Recognition, and others).
Enabler 1 (INTERACT) is an authoring tool designed as a Unity plugin that simplifies human-centered processes, enabling faster, iterative decision making as well as the building of training scenarios. The plugin is a generic, no-code tool for creating physics-based VR training scenarios that are fully compatible with the XR2Learn platform.
INTERACT is based on a cutting-edge physics engine that allows realistic interactions, such as collision detection and ergonomic evaluations. Users can create physically realistic VR simulations for multiple purposes, such as training across various fields of application, from 3D data imported into the authoring tool.
Enablers 2 and 3 will be delivered as an open-source library that allows developers to pre-train affect representation models in a self-supervised learning (SSL) fashion using unannotated audio, speech, or VR body-tracking data, as well as to use models pre-trained on large open-source Affect Recognition datasets. These models can later be fine-tuned.
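As a rough illustration of what such SSL pre-training might look like in practice, the sketch below applies a simple contrastive (SimCLR-style) objective to unannotated windows of body-tracking data; the encoder architecture, augmentation, window shapes, and names are assumptions made for illustration and do not reflect the enablers' actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionEncoder(nn.Module):
    """Toy encoder for fixed-length body-tracking windows (hypothetical architecture)."""
    def __init__(self, n_features=66, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, x):  # x: (batch, n_features, time)
        return self.net(x)

def augment(x):
    """Simple augmentation: jitter the signal with Gaussian noise."""
    return x + 0.05 * torch.randn_like(x)

def nt_xent(z1, z2, temperature=0.5):
    """Normalized temperature-scaled cross-entropy (SimCLR-style) contrastive loss."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))                    # ignore self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = MotionEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
batch = torch.randn(32, 66, 100)                         # unannotated tracking windows
loss = nt_xent(encoder(augment(batch)), encoder(augment(batch)))
loss.backward()
opt.step()
```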
Combined in another open-source repository, Enablers 4 and 5 will provide supervised Affect Recognition routines with which users can build affect or emotion recognition models (pre-trained via Enablers 2 and 3) using data labeled according to a chosen emotion model.
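A minimal sketch of this supervised fine-tuning step is shown below, assuming a categorical emotion model and a pre-trained encoder such as the one sketched above; the class names, label set, and training loop are illustrative assumptions rather than the enablers' actual interface.

```python
import torch
import torch.nn as nn

# Hypothetical categorical emotion model used to label the data.
EMOTIONS = ["neutral", "joy", "frustration", "boredom"]

class EmotionClassifier(nn.Module):
    """Wraps a pre-trained encoder with a lightweight classification head."""
    def __init__(self, encoder, emb_dim, num_classes=len(EMOTIONS)):
        super().__init__()
        self.encoder = encoder                       # weights come from SSL pre-training
        self.head = nn.Linear(emb_dim, num_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

# Stand-in for a pre-trained encoder (e.g. the MotionEncoder from the previous sketch).
pretrained_encoder = nn.Sequential(nn.Flatten(), nn.Linear(66 * 100, 128))
model = EmotionClassifier(pretrained_encoder, emb_dim=128)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)   # small LR to preserve pre-training
criterion = nn.CrossEntropyLoss()

windows = torch.randn(32, 66, 100)                    # labeled body-tracking windows
labels = torch.randint(0, len(EMOTIONS), (32,))       # emotion annotations
loss = criterion(model(windows), labels)
loss.backward()
opt.step()
```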
Finally, all previously trained models can be used in inference mode within Enabler 6, which aims to capture affective states during VR training sessions and to provide recommendations on difficulty-level adjustments and interventions in the training strategies.
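As a hedged sketch of how such an inference loop could drive flow-based recommendations (all labels, mappings, and function names below are illustrative assumptions), a fine-tuned classifier is run on an incoming sensor window and its prediction is mapped to a difficulty adjustment: boredom suggests the task is too easy, frustration that it is too hard.

```python
import torch

# Illustrative mapping from detected affective state to a flow-based recommendation.
RECOMMENDATIONS = {
    "boredom": "increase difficulty",
    "frustration": "decrease difficulty or trigger an intervention",
    "neutral": "keep current difficulty",
    "joy": "keep current difficulty",
}

@torch.no_grad()
def recommend(model, window, emotions=("neutral", "joy", "frustration", "boredom")):
    """Run the fine-tuned model in inference mode and map the prediction to advice."""
    model.eval()
    logits = model(window.unsqueeze(0))              # add batch dimension
    state = emotions[int(logits.argmax(dim=-1))]
    return state, RECOMMENDATIONS[state]

# Example: one incoming body-tracking window captured during a VR training session.
# state, advice = recommend(model, torch.randn(66, 100))
```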