15/7/15 MIG at Interact 2015

We’ll be at Interact 2015 in Germany in September, where we are presenting a poster and an interactive demo. Both are about interactive light feedback for gesture systems.

  • E. Freeman, S. Brewster, and V. Lantz: Interactive Light Feedback: Illuminating Above-Device Gesture Interfaces. To appear in Interact ’15 Demos.
  • E. Freeman, S. Brewster, and V. Lantz: Towards In-Air Gesture Control of Household Appliances with Limited Displays. To appear in Interact ’15 Posters.

3/6/15 TVX 2015

We have two full papers at ACM TVX’15, this year held in sunny Brussels, Belgium! Mark will be presenting them both (video previews below):

  • McGill, M., Williamson, J., and Brewster, S.: It Takes Two (To Co-View): Collaborative Multi-View TV. In Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video – TVX ’15, ACM, pp. 23-32, 2015.
  • McGill, M., Williamson, J., and Brewster, S.: Who’s the Fairest of Them All: Device Mirroring for the Connected Home. In Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video – TVX ’15, ACM, pp. 83-92, 2015.

6/3/15 CHI Preview #4: ABBI Poster

Effects of Sound Type on Recreating the Trajectory of a Moving Source

by G. Wilson, S. Brewster, H. Caltenco, C. Magnusson, S. Finocchietti, G. Baud-Bovy and M. Gori

This work-in-progress paper from the ABBI project will be presented as a poster at CHI ’15.

Abstract

The ABBI (Audio Bracelet for Blind Interaction) device is designed for visually impaired and blind children to wear on the wrist and produce sound based on the movement of the arm through space. The primary function is to inform a child (or adult) about his/her own movements to aid spatial cognition rehabilitation. However, the device could also be worn by friends and family and be used to inform the visually impaired person of others’ movement in the environment. In this paper, we describe an initial experiment that measured how well blindfolded sighted individuals could track a moving sound source in 2D horizontal space and then walk the same route to the same end position. Six sounds, including natural sounds, abstract sounds, Earcons and speech, were compared to identify which type of sound produced more accurate route recreation.

5/3/15 CHI Preview #3: Usability of Virtual Reality

A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays

by Mark McGill, Daniel Boland, Roderick Murray-Smith and Stephen Brewster

Abstract

We identify usability challenges facing consumers adopting Virtual Reality (VR) head-mounted displays (HMDs) in a survey of 108 VR HMD users. Users reported significant issues in interacting with, and being aware of their real-world context when using a HMD. Building upon existing work on blending real and virtual environments, we performed three design studies to address these usability concerns. In a typing study, we show that augmenting VR with a view of reality significantly corrected the performance impairment of typing in VR. We then investigated how much reality should be incorporated and when, so as to preserve users’ sense of presence in VR. For interaction with objects and peripherals, we found that selectively presenting reality as users engaged with it was optimal in terms of performance and users’ sense of presence. Finally, we investigated how this selective, engagement-dependent approach could be applied in social environments, to support the user’s awareness of the proximity and presence of others.

Video Preview

4/3/15 CHI Preview #2: In-Car Multimodal Displays

To Beep or Not to Beep? Comparing Abstract versus Language-based Multimodal Driver Displays

by Ioannis Politis, Stephen Brewster and Frank Pollick

Abstract

Multimodal displays are increasingly being utilized as driver warnings. Abstract warnings, without any semantic association to the signified event, and language-based warnings are examples of such displays. This paper presents a first comparison between these two types, across all combinations of audio, visual and tactile modalities. Speech, text and Speech Tactons (a novel form of tactile warnings synchronous to speech) were compared to abstract pulses in two experiments. Results showed that recognition times of warning urgency during a non-critical driving situation were shorter for abstract warnings, highly urgent warnings and warnings including visual feedback. Response times during a critical situation were shorter for warnings including audio. We therefore suggest abstract visual feedback when informing drivers during a non-critical situation and audio in a highly critical one. Language-based warnings during a critical situation performed equally well as abstract ones, so they are suggested as less annoying vehicle alerts.

Video Preview

2/3/15 CHI Preview #1: Thermal Feedback

In the Heat of the Moment: Subjective Interpretations of Thermal Feedback During Interaction

by Graham Wilson, Gavin Davidson and Stephen Brewster

Description

Thermal feedback can be an engaging and convincing means of conveying information, but its meaning in interaction can be ambiguous, so interface designers may not be sure how users naïvely interpret thermal feedback during interaction. The research in this paper measured subjective interpretations of thermal stimuli in four different scenarios: social media activity, a colleague’s presence, the extent of use of digital content and restaurant experiences. It also presents guidelines for the design of thermal feedback to help others create effective thermal interfaces.

Abstract

Research has shown that thermal feedback can be an engaging and convincing means of conveying experimenter-predefined meanings, e.g., material properties or message types. However, thermal perception is subjective and its meaning in interaction can be ambiguous. Interface designers may not be sure how users could naïvely interpret thermal feedback during interaction. Little is also known about how users would choose thermal cues to convey their own meanings. The research in this paper tested subjective interpretations of thermal stimuli in three different scenarios: social media activity, a colleague’s presence and the extent of use of digital content. Participants were also asked to assign their own thermal stimuli to personal experiences, to help us understand what kinds of stimuli people associate with different meanings. The results showed strong agreement among participants concerning what warmth (presence, activity, quality) and cool mean (absence, poor quality). Guidelines for the design of thermal feedback are presented to help others create effective thermal interfaces.

Video Preview

18/02/15 Three Papers and a WiP at CHI ’15

We have had three full papers accepted for CHI ’15 in Seoul, Korea! Mark’s looks at techniques for bringing reality into virtual reality, Graham’s describes a series of experiments on thermal output and Ioannis’ is about in-car multimodal notifications. Graham also has a WiP on recreating the trajectories of moving sound sources for ABBI. See you in Seoul!

19/11/14 Hiring an RA and PhD Student

We are currently looking for a Research Assistant and a PhD student for a three-year EU funded haptics project called HAPPINESS.

For more information, please see the job listings below:

  • Research Assistant / Associate: deadline 7th December 2014
  • PhD student: deadline 14th December 2014

Please note that both of these positions have now been filled.

11/11/14 ICMI in Istanbul

Euan will be at the International Conference on Multimodal Interaction (ICMI) in Istanbul to present some work from his PhD on tactile feedback for above-device gestures. For more information about this paper please see here.

20/09/14 MIG at NordiCHI ’14

Two of our group will be at NordiCHI ’14 in Helsinki next month. Euan is presenting a workshop paper at the Beyond the Switch workshop, and Dong-Bach will be presenting a paper from his previous institution and taking part in the Human-Technology Choreographies workshop.