-
Towards the development of an automated robotic storyteller: comparing approaches for emotional story annotation for non-verbal expression via body language J. Multimodal User Interfaces (IF 2.9) Pub Date : 2024-04-04 Sophia C. Steinhaeusser, Albin Zehe, Peggy Schnetter, Andreas Hotho, Birgit Lugrin
-
Review of substitutive assistive tools and technologies for people with visual impairments: recent advancements and prospects J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-12-19 Zahra J. Muhsin, Rami Qahwaji, Faruque Ghanchi, Majid Al-Taee
-
Augmented reality and deep learning based system for assisting assembly process J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-12-14
In Industry 4.0, manufacturing entails rapid changes in customer demands, leading to mass customization. The variation in customer requirements results in small batch sizes and several process variations. The assembly task is one of the most important steps in any manufacturing process. A factory floor worker often needs a guidance system, due to variations in product or process, to assist them in
-
Modelling the “transactive memory system” in multimodal multiparty interactions J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-11-11 Beatrice Biancardi, Maurizio Mancini, Brian Ravenet, Giovanna Varni
-
Model-based sonification based on the impulse pattern formulation J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-11-06 Simon Linke, Rolf Bader, Robert Mores
-
Three-dimensional sonification as a surgical guidance tool J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-10-27 Tim Ziemer
-
A study on the attention of people with low vision to accessibility guidance signs J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-10-26 Weitao Jiang, Bingxin Zhang, Ruiqi Sun, Dong Zhang, Shan Hu
-
An interdisciplinary journey towards an aesthetics of sonification experience J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-10-21 Mariana Seiça, Licínio Roque, Pedro Martins, F. Amílcar Cardoso
-
Multimodal exploration in elementary music classroom J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-10-18 Martha Papadogianni, Ercan Altinsoy, Areti Andreopoulou
-
Comparing alternative modalities in the context of multimodal human–robot interaction J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-10-19 Suprakas Saren, Abhishek Mukhopadhyay, Debasish Ghose, Pradipta Biswas
-
Hearing loss prevention at loud music events via real-time visuo-haptic feedback J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-10-13 Luca Turchet, Simone Luiten, Tjebbe Treub, Marloes van der Burgt, Costanza Siani, Alberto Boem
-
A social robot as your reading companion: exploring the relationships between gaze patterns and knowledge gains J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-10-12 Xuan Liu, Jiachen Ma, Qiang Wang
-
In-vehicle air gesture design: impacts of display modality and control orientation J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-09-14 Jason Sterkenburg, Steven Landry, Shabnam FakhrHosseini, Myounghoon Jeon
-
Pegasos: a framework for the creation of direct mobile coaching feedback systems J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-09-12 Martin Dobiasch, Stefan Oppl, Michael Stöckl, Arnold Baca
-
PepperOSC: enabling interactive sonification of a robot’s expressive movement J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-09-09 Adrian B. Latupeirissa, Roberto Bresin
-
Perceptually congruent sonification of auditory line charts J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-08-30 Joe Fitzpatrick, Flaithri Neff
-
Research on the application of gaze visualization interface on virtual reality training systems J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-08-18 Haram Choi, Joungheum Kwon, Sanghun Nam
-
Facial expression recognition via transfer learning in cooperative game paradigms for enhanced social AI J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-08-14 Paula Castro Sánchez, Casey C. Bennett
-
Exploring user-defined gestures for lingual and palatal interaction J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-08-10 Santiago Villarreal-Narvaez, Jorge Luis Perez-Medina, Jean Vanderdonckt
-
Understanding virtual drilling perception using sound, and kinesthetic cues obtained with a mouse and keyboard J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-08-04 Guoxuan Ning, Brianna Grant, Bill Kapralos, Alvaro Quevedo, K. C. Collins, Kamen Kanev, Adam Dubrowski
-
The cognitive basis for virtual reality rehabilitation of upper-extremity motor function after neurotraumas J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-07-27 Sophie Dewil, Shterna Kuptchik, Mingxiao Liu, Sean Sanford, Troy Bradbury, Elena Davis, Amanda Clemente, Raviraj Nataraj
-
SonAir: the design of a sonification of radar data for air traffic control J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-07-08 Elias Elmquist, Alexander Bock, Jonas Lundberg, Anders Ynnerman, Niklas Rönnberg
-
A low duration vibro-tactile representation of Braille characters J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-06-28 Özgür Tamer, Barbaros Kirişken, Tunca Köklü
-
Remote social touch framework: a way to communicate physical interactions across long distances J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-05-24 Ali Abdulrazzaq Alsamarei, Bahar Şener
-
Investigating the influence of agent modality and expression on agent-mediated fairness behaviours J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-05-23 Hiu Lam Yip, Karin Petrini
-
Personality trait estimation in group discussions using multimodal analysis and speaker embedding J. Multimodal User Interfaces (IF 2.9) Pub Date : 2023-02-08 Candy Olivia Mawalim, Shogo Okada, Yukiko I. Nakano, Masashi Unoki
-
Virtual reality can mediate the learning phase of upper limb prostheses supporting a better-informed selection process J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-12-21 Lucas El Raghibi, Ange Pascal Muhoza, Jeanne Evrard, Hugo Ghazi, Grégoire van Oldeneel tot Oldenzeel, Victorien Sonneville, Benoît Macq, Renaud Ronsse
-
The effects of olfactory cues as Interface notifications on a mobile phone J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-12-06 Miao Huang, Chien-Hsiung Chen
-
Theory-based approach for assessing cognitive load during time-critical resource-managing human–computer interactions: an eye-tracking study J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-11-28 Natalia Sevcenko, Tobias Appel, Manuel Ninaus, Korbinian Moeller, Peter Gerjets
-
Grouping and Determining Perceived Severity of Cyber-Attack Consequences: Gaining Information Needed to Sonify Cyber-Attacks J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-11-01 Keith S. Jones, Natalie R. Lodinger, Benjamin P. Widlus, Akbar Siami Namin, Emily Maw, Miriam Armstrong
-
TapCAPTCHA: non-visual CAPTCHA on touchscreens for visually impaired people J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-10-31 Mrim Alnfiai, Fawaz Alassery
-
Gesture-based guidance for navigation in virtual environments J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-09-29 Inam Ur Rehman, Sehat Ullah, Numan Ali, Ihsan Rabbi, Riaz Ullah Khan
-
Commanding a drone through body poses, improving the user experience J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-09-23 Brandon Yam-Viramontes, Héctor Cardona-Reyes, Javier González-Trejo, Cristian Trujillo-Espinoza, Diego Mercado-Ravell
-
Exploring visual stimuli as a support for novices’ creative engagement with digital musical interfaces J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-08-01 Yongmeng Wu, Nick Bryan-Kinns, Jinyi Zhi
-
Designing multi-purpose devices to enhance users’ perception of haptics J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-07-23 Riccardo Galdieri, Cristian Camardella, Marcello Carrozzino, Antonio Frisoli
-
A SLAM-based augmented reality app for the assessment of spatial short-term memory using visual and auditory stimuli J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-07-18 M.-Carmen Juan, Magdalena Mendez-Lopez, Camino Fidalgo, Ramon Molla, Roberto Vivo, David Paramo
-
Ipsilateral and contralateral warnings: effects on decision-making and eye movements in near-collision scenarios J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-05-25 Joost de Winter, Jimmy Hu, Bastiaan Petermeijer
-
The multimodal EchoBorg: not as smart as it looks J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-05-05 Sara Falcone, Jan Kolkmeier, Merijn Bruijnes, Dirk Heylen
-
A survey of challenges and methods for Quality of Experience assessment of interactive VR applications J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-04-29 Sara Vlahovic, Mirko Suznjevic, Lea Skorin-Kapov
-
A review on communication cues for augmented reality based remote guidance J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-04-12 Weidong Huang, Mathew Wakefield, Troels Ammitsbøl Rasmussen, Seungwon Kim, Mark Billinghurst
-
Combining haptics and inertial motion capture to enhance remote control of a dual-arm robot J. Multimodal User Interfaces (IF 2.9) Pub Date : 2022-01-09 Vicent Girbés-Juan, Vinicius Schettino, Luis Gracia, J. Ernesto Solanes, Yiannis Demiris, Josep Tornero
High dexterity is required in tasks involving contact between objects, such as surface conditioning (wiping, polishing, scuffing, sanding, etc.), especially when the location of the objects involved is unknown or highly inaccurate because they are moving, like a car body on an automotive production line. These applications require both human adaptability and robot accuracy. However, sharing the
-
The Audio-Corsi: an acoustic virtual reality-based technological solution for evaluating audio-spatial memory abilities J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-11-24 Walter Setti, Isaac Alonso-Martinez Engel, Luigi F. Cuturi, Monica Gori, Lorenzo Picinali
Spatial memory is a cognitive skill that allows the recall of information about a space, its layout, and the locations of items within it. We present a novel application built around 3D spatial audio technology to evaluate audio-spatial memory abilities. The sound sources were spatially distributed using the 3D Tune-In Toolkit, a virtual acoustic simulator. The participants are presented with sequences
-
Preliminary assessment of a multimodal electric-powered wheelchair simulator for training of activities of daily living J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-11-19 Felipe R. Martins, Eduardo L. M. Naves, Yann Morère, Angela A. R. de Sá
Driving an electric-powered wheelchair requires a specific set of motor, visual, and cognitive skills. One option is to use wheelchair simulators to train wheelchair driving in a controlled and completely safe environment. However, existing simulators are not simultaneously multimodal and equipped with training scenarios for activities of daily living. These
-
Importance of force feedback for following uneven virtual paths with a stylus J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-10-30 Federico Fontana, Francesco Muzzolini, Davide Rocchesso
It is commonly known that a physical textured path can be followed by indirect touch through a probe, even in the absence of vision, if sufficiently informative cues are delivered by the other sensory channels. However, prior research indicates that the level of performance while following a virtual path on a touchscreen depends on the type and channel of such cues. The re-enactment of oriented forces
-
The effect of eye movement sonification on visual search patterns and anticipation in novices J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-10-18 Maryam Khalaji, Mahin Aghdaei, Alireza Farsi, Alessandro Piras
Visual information is essential for successfully anticipating the direction of a shot in ball sports, whereas the use of other senses in motor learning has received less attention. The present study examined whether multisensory learning, in which visual attention is oriented through sound, would influence anticipatory judgments compared with the visual system alone. Forty novice students were
-
Informing the design of a multisensory learning environment for elementary mathematics learning J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-10-11 Luigi F. Cuturi, Giulia Cappagli, Nikoleta Yiannoutsou, Sara Price, Monica Gori
It is well known that primary school children may face difficulties in acquiring mathematical competence, possibly because teaching is generally based on formal lessons with little opportunity to exploit more multisensory-based activities within the classroom. To overcome such difficulties, we report here the exemplary design of a novel multisensory learning environment for teaching mathematical concepts
-
Advanced multimodal interaction techniques and user interfaces for serious games and virtual environments J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-08-10 Fotis Liarokapis, Sebastian von Mammen, Athanasios Vourvopoulos
Human-computer interaction and multimodal user interfaces have evolved dramatically over the last few years, offering novel ways of interaction for serious games and virtual environments. Multimodal user interfaces have progressed both in terms of hardware, from basic I/O controls to sophisticated sensor devices (e.g. body tracking, physiological sensors), and in terms of software from
-
TapCalculator: nonvisual touchscreen calculator for visually impaired people preliminary user study J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-08-09 Mrim Alnfiai
TapCalculator is a novel, non-visual touchscreen calculator designed specifically for visually impaired users; it uses simple gestures to represent digits and operations, supported by audio feedback. It enables users to perform basic calculations without visually locating buttons on a screen, knowing braille, or learning complex codes for digits. It uses a series of tapping and swiping
-
Combining audio and visual displays to highlight temporal and spatial seismic patterns J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-07-27 Arthur Paté, Gaspard Farge, Benjamin K. Holtzman, Anna C. Barth, Piero Poli, Lapo Boschi, Leif Karlstrom
Data visualization, and to a lesser extent data sonification, are classic tools for the scientific community. However, the two approaches are very rarely combined, although they are highly complementary: our visual system is good at recognizing spatial patterns, whereas our auditory system is better tuned to temporal patterns. In this article, data representation methods are proposed that combine
-
SoundSight: a mobile sensory substitution device that sonifies colour, distance, and temperature J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-07-02 Giles Hamilton-Fletcher, James Alvarez, Marianna Obrist, Jamie Ward
Depth, colour, and thermal images contain practical and actionable information for the blind. Conveying this information through alternative modalities such as audition creates new interaction possibilities for users as well as opportunities to study neuroplasticity. The ‘SoundSight’ App (www.SoundSight.co.uk) is a smartphone platform that allows 3D position, colour, and thermal information to directly
-
A wearable virtual touch system for IVIS in cars J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-06-22 Gowdham Prabhakar, Priyam Rajkhowa, Dharmesh Harsha, Pradipta Biswas
In the automotive domain, secondary tasks such as accessing the infotainment system or adjusting the air conditioning vents and side mirrors distract drivers from driving. Though existing modalities like gesture and speech recognition systems facilitate secondary tasks by reducing the time the eyes are off the road, they often require remembering a set of gestures or screen sequences. In this
-
Interactive exploration of a hierarchical spider web structure with sound J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-06-21 Isabelle Su, Ian Hattwick, Christine Southworth, Evan Ziporyn, Ally Bisshop, Roland Mühlethaler, Tomás Saraceno, Markus J. Buehler
3D spider webs exhibit highly intricate fiber architectures and owe their outstanding performance to a hierarchical organization spanning orders of magnitude in length scale, from the molecular silk protein, to micrometer-sized fibers, up to the cm-scale web. Similarly, but in a completely different physical manifestation, music has a hierarchical structure composed of elementary sine wave building
-
Correction to: A gaze-based interactive system to explore artwork imagery J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-05-31 Piercarlo Dondi, Marco Porta, Angelo Donvito, Giovanni Volpe
A Correction to this paper has been published: https://doi.org/10.1007/s12193-021-00373-z
-
A gaze-based interactive system to explore artwork imagery J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-05-21 Piercarlo Dondi, Marco Porta, Angelo Donvito, Giovanni Volpe
Interactive and immersive technologies can significantly enhance the experience of museums and exhibits. Several studies have shown that multimedia installations can attract visitors, presenting cultural and scientific information in an appealing way. In this article, we present our workflow for achieving gaze-based interaction with artwork imagery. We designed both a tool for creating interactive
-
Grounding behaviours with conversational interfaces: effects of embodiment and failures J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-03-24 Dimosthenis Kontogiorgos, Andre Pereira, Joakim Gustafson
Conversational interfaces that interact with humans need to continuously establish, maintain and repair common ground in task-oriented dialogues. Uncertainty, repairs and acknowledgements are expressed in user behaviour in the continuous efforts of the conversational partners to maintain mutual understanding. Users change their behaviour when interacting with systems in different forms of embodiment
-
RFID-based tangible and touch tabletop for dual reality in crisis management context J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-03-19 Walid Merrad, Alexis Héloir, Christophe Kolski, Antonio Krüger
Robots are becoming more and more present in many domains of our daily lives. Their usage encompasses industry, home automation, space exploration, and military operations. Robots can also be used in crisis management situations, where it is impossible to access, or dangerous to send humans into, the intervention area. The present work compares users' performances on tangible and touch user interfaces
-
Behavior and usability analysis for multimodal user interfaces J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-03-16 Hamdi Dibeklioğlu, Elif Surer, Albert Ali Salah, Thierry Dutoit
Multimodal interfaces offer ever-changing tasks and challenges for designers to accommodate newer technologies, and as these technologies become more accessible, newer application scenarios emerge. Prototype development and user evaluation are important steps in the creation of solutions to these challenges. Furthermore, playful interactions and games are shown to be important settings to study social
-
Identifying and evaluating conceptual representations for auditory-enhanced interactive physics simulations J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-03-15 Brianna J. Tomlinson, Bruce N. Walker, Emily B. Moore
Interactive simulations are tools that can help students understand and learn about complex relationships. While most simulations are primarily visual, largely for historical reasons, sound can be used to add to the experience. In this work, we evaluated sets of audio designs for two different, but contextually and visually similar, simulations. We identified key aspects of the audio representations
-
Non-native speaker perception of Intelligent Virtual Agents in two languages: the impact of amount and type of grammatical mistakes J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-03-07 David Obremski, Jean-Luc Lugrin, Philipp Schaper, Birgit Lugrin
Mixed-cultural membership is becoming increasingly common in our modern society. It is thus beneficial in several ways to create Intelligent Virtual Agents (IVAs) that reflect a mixed-cultural background as well, e.g., for educational settings. For research with such IVAs, it is essential that they are classified as non-native by members of the target culture. In this paper, we focus on variations
-
Training public speaking with virtual social interactions: effectiveness of real-time feedback and delayed feedback J. Multimodal User Interfaces (IF 2.9) Pub Date : 2021-03-06 Mathieu Chollet, Stacy Marsella, Stefan Scherer
Social signal processing and virtual social interaction technologies have allowed the creation of social skills training applications, and initial studies have shown that such solutions can lead to positive training outcomes and could complement traditional teaching methods by providing cheap, accessible, safe tools for training social skills. However, these studies evaluated social skills training