I lead the Multimodal Interaction group, part of GIST. My research focuses on using multiple sensory modalities and control mechanisms (particularly hearing, touch and gesture) to create a rich, natural interaction between human and computer. My work has a strong experimental focus, applying perceptual research to practical situations.
I am a researcher in Human-Computer Interaction, focusing on haptic interfaces, non-visual feedback and mobile interaction. My interests are predominantly the perceptual, cognitive and psychophysical aspects of interaction with computing devices: understanding the range and limits of human ability in terms of both input and output. My research has looked at different aspects of haptic interaction, including ultrasonic haptic feedback, pressure-based input on mobile devices and the design of thermal feedback for HCI.
I am a postdoctoral researcher in Human-Computer Interaction. I am mainly interested in gestural interaction. My PhD focused on designing and evaluating new gestural interaction techniques in different contexts, such as in the living room or while on the move, using remote controls or on-body interaction. My current research investigates the design and evaluation of haptic feedback for mid-air, hands-free gestures.
I'm interested in novel interaction techniques and how the right feedback at the right time can make them easier to use. I am especially interested in mid-air gestures and non-visual feedback. At the moment I work on the Levitate project. I also maintain this website.
Matthew is based in Health and Wellbeing as well as Computing Science and is interested in Assistive Technology, which can help people compensate for memory difficulties. He has recently been focusing on the design of smartphone reminders, which can improve everyday independence for people with memory difficulties.
My research focuses on the effectiveness, reliability and safety of multimodal displays for mid-air gestures in driving scenarios. I am investigating the influence of the modalities used, the type of information, the complexity of multimodal messages, driver distraction, and the usability of such displays.
My research focuses on sonification for experience and education - primarily geared toward astronomy. I am interested in applying ideas and methods from perception, psychophysics and acoustics research to create effective sonification techniques. I am currently working on evaluating sonification methods for allowing visually impaired students to analyse data from robotic telescopes.
Patrizia Di Campli San Vito
I'm a PhD student researching thermal interaction in cars. I finished my BSc and MSc in Media Informatics at the University of Ulm in Germany and started my PhD in 2016. I am working with Jaguar Land Rover, who fund my research through an EPSRC iCASE award.
Alberto González Olmos
I am doing my PhD in Human Computer Interaction at the University of Glasgow. My research is on Assistive Technologies for young people with Acquired Brain Injury. I am focusing on voice-operated technologies that can facilitate the rehabilitation process of people with ABI and the communication between them and their care providers. I am also an Early Stage Researcher at the TEAM Innovation Training Network.
Past members of the Multimodal Interaction Group, from 2013 onwards.
Past members pre-2013
- Ashley Walker – Ashley worked on 3D sound and its use in HCI, games sound, and the combination of 3D sound and graphics.
- Gregory Leplatre – Gregory worked on sound design for navigation in large menu structures, such as those on mobile telephones. He works at Edinburgh Napier University.
- Antti Pirhonen – Antti was a visiting researcher from July 2000 – July 2001. He is now at the University of Jyvaskyla in Finland.
- Ramesh Ramloll – He worked on the MultiVis project on audio and 3D audio interfaces for blind people.
- Murray Crease – Murray worked on the Toolkit project using sound to improve the usability of widgets, plus context and resource sensitivity. He now works for NRC in Canada.
- Ray Wai Yu – Ray worked on haptic and multimodal interfaces in the MultiVis project and has now moved to the Virtual Engineering Centre at Queen's University Belfast.
- Ian Oakley – He worked on haptic enhancement of desktop interfaces and collaborative haptics for his PhD. He now works for Media Lab Europe in Dublin.
- Jo Lumsden – Jo worked on the Toolkit project and then moved on to audio and gestures for mobile devices (in a forerunner of the AudioClouds project). She now works for NRC in Canada.
- Beate Riedel worked on the MultiVis project with Mike Burton in Psychology.
- Joy Goodman worked on the Utopia project designing systems for older people. She now works at the Engineering Design Centre in Cambridge.
- Georgios Marentakis now works on audio and gesture interfaces at McGill University in Canada.
- Lorna Brown now works at Sony in London.
- Steven Wall now works for Amberlight in London.
- Sarah Baillie now works at Bristol University.
- Johan Kildal now works on multimodal interaction at Nokia Research Center in Helsinki, Finland.
- Eve Hoggan now works at HIIT in Helsinki.
- Tilman Dingler now works in Stuttgart.
- Craig Stewart now works at Dundee University.
- Calkin Suero-Montero works in Finland.
- Andrew Crossan is a lecturer at Glasgow Caledonian University.
- Martin Halvey is a lecturer at Glasgow Caledonian University.
- David McGookin worked on all things multimodal.
- Ciaran Owens worked on pedestrian navigation and errors.