Interacting with a swarm of semiautonomous drones with SoundPainting Gestures
Language
EN
Conference paper
This document was published in
Unmanned and Swarming Conference: Research Challenges for Future Unmanned Systems and Autonomous Swarming, 2018-10-10, Bordeaux.
Abstract (in English)
This work explores the use of a gesture-based interaction language called SoundPainting to interact with a swarm of unmanned aerial systems (UAS). A swarm is not just a collection of UASs: it is much more, both in terms of supported features and in terms of issues. The goal of this communication is to present the issues involved in interacting with a semiautonomous swarm through gestures. The key point is that SoundPainting was designed for directing a set of improvising live performers; we therefore link this ability to improvise to the decision-making capacity of the autonomous drones. SoundPainting already integrates the notion of groups of entities and makes it possible to address a single entity of a set or subset while still being able to address the set as a whole. Indeed, SoundPainting allows a real exchange and an adaptive dialogue between the soundpainter (here, the pilot) and the group, enabling contextual interpretation by each individual and generating rich interaction and dialogue. In the proposed approach, a pilot performs SoundPainting gestures in front of a Kinect sensor (© Microsoft); the gesture recognized by a gesture recognition system is then sent to the command and control system in order to direct the drones' movements. In a more general context, our perspective, beyond the definition of a SoundPainting-based gestural interaction used to interact with individual UASs or with swarms of UASs, is to experiment with it, first within a simulator, then on flying platforms, and finally in real-world scenarios. We hope to answer the following questions: How relevant are the gestures? How reliable is their detection? What is the user experience?
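The pipeline described in the abstract (gesture recognition feeding a command and control system that directs individual drones or the whole swarm) can be sketched as follows. This is a minimal illustrative sketch only: all class names and the gesture-to-command mapping are hypothetical assumptions for the example, not the authors' actual system.

```python
# Hypothetical sketch of routing a recognized SoundPainting-style gesture
# to either the whole swarm or a single drone.
from dataclasses import dataclass, field
from typing import List, Optional

# Assumed mapping from recognized gestures to swarm commands (illustrative only).
GESTURE_TO_COMMAND = {
    "whole_group": "form_up",
    "play": "take_off",
    "off": "land",
}

@dataclass
class Drone:
    ident: int
    commands: List[str] = field(default_factory=list)

    def execute(self, command: str) -> None:
        # Stand-in for a real flight-control interface.
        self.commands.append(command)

@dataclass
class Swarm:
    drones: List[Drone]

    def dispatch(self, gesture: str, target: Optional[int] = None) -> None:
        """Send the command for a recognized gesture to one drone or to all."""
        command = GESTURE_TO_COMMAND.get(gesture)
        if command is None:
            return  # unrecognized gesture: ignore rather than act
        targets = self.drones if target is None else [self.drones[target]]
        for drone in targets:
            drone.execute(command)

swarm = Swarm([Drone(i) for i in range(3)])
swarm.dispatch("play")           # addresses the set as a whole
swarm.dispatch("off", target=1)  # addresses a single entity of the set
```

The `target=None` convention mirrors the abstract's point that SoundPainting can address one entity of a set while still being able to address the set as a whole.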
Research units