Multimodal displays are increasingly used in cars, introducing the challenge of how to alert the driver effectively without distracting them from the primary task of driving. Although such ways of providing information through the audio, visual, and tactile modalities are already offered by many automotive manufacturers, there is still scope for identifying good ways to use these modalities to the driver's benefit, without overloading them with information and increasing risk. Therefore, this work aims to investigate multimodal cues, taking into account the urgency of the situation as well as parameters related to the environment and the driver, and to create an algorithm that decides which modalities are best, and when, based on this information.
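As a rough illustration of what such a decision algorithm might look like, the sketch below maps urgency and two hypothetical context parameters (cabin noise and the driver's visual load) to a set of warning modalities. This is not the work's actual algorithm; the function name, inputs, and thresholds are all illustrative assumptions.

```python
# Hypothetical sketch, not the project's actual algorithm: a simple
# rule-based selector mapping situation urgency and context to a set
# of warning modalities. All inputs and thresholds are illustrative.

def select_modalities(urgency, ambient_noise_db, visual_load):
    """Return the set of modalities to use for a warning.

    urgency          -- 1 (low) to 3 (high)
    ambient_noise_db -- assumed cabin noise level in dB
    visual_load      -- assumed visual demand of the scene, 0..1
    """
    modalities = set()
    if urgency >= 3:
        # Critical warnings: redundant cues across all channels.
        modalities.update({"audio", "visual", "tactile"})
    elif urgency == 2:
        # Medium urgency: always use tactile, and add the other
        # channels only when they are not already heavily loaded.
        modalities.add("tactile")
        if visual_load < 0.7:
            modalities.add("visual")
        if ambient_noise_db < 70:
            modalities.add("audio")
    else:
        # Low urgency: a single unobtrusive cue on the freer channel.
        modalities.add("visual" if visual_load < 0.7 else "tactile")
    return modalities


# High urgency always triggers all three channels.
assert select_modalities(3, 80, 0.9) == {"audio", "visual", "tactile"}
# Medium urgency in a noisy cabin with a demanding scene: tactile only.
assert select_modalities(2, 80, 0.9) == {"tactile"}
# Low urgency, quiet cabin, light scene: a single visual cue.
assert select_modalities(1, 60, 0.3) == {"visual"}
```

In this toy version the rules are hand-written; the work described above would instead derive such a mapping from experimental data on how drivers respond to each modality under different conditions.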
This project is partly funded by Freescale Semiconductor Inc.
For more information, contact: firstname.lastname@example.org