E. Pekkola (chair); T. Senan; J. Rahm; E. Edgerton; N. Halin. Location: A7

Recall of spoken words in English and Swedish heard at different signal-to-noise ratios and different reverberation times - Children aged 10-11 years

A. Hurtig, M. Keus van de Poll, E. P. Pekkola, & S. Hygge

Noise impairs speech perception, which in turn makes memory and learning more difficult. School children are expected to be particularly vulnerable to the negative effects of noise. In this study we varied reverberation time (RT) and signal-to-noise ratio (SNR) to see how they affected recall of words in Swedish (native tongue) and English. Participants were 72 children in the fourth grade who listened to wordlists presented in Swedish and English with broadband noise in the background. We compared two RT conditions: a short RT (0.3 sec.) and a long RT (1.2 sec.), and two SNR conditions: a low SNR (+3 dB) and a high SNR (+12 dB). Each wordlist had 8 words to be recalled. Main effects of language and SNR were found. Children recalled fewer words when the words were presented in English or at a low SNR. Interactions were found between language, RT, SNR and serial position, i.e., whether the words were at the beginning, in the middle or at the end of the wordlist. Recall performance was best with a short RT and a high SNR. Fourth graders recalled more words in their native language than in English. Children might have difficulties with semantic association and with understanding the meaning of words in English. Recall performance was markedly improved with good listening conditions, suggesting that improving classroom acoustics can benefit memory and learning.

Investigating and Modelling Distortions of Cognitive Processes by Environmental Sounds

T. Senan, A. Kohlrausch, S. Jelfs, & M. H. Park

The distracting effects of extraneous sounds on cognitive processes have been investigated in the paradigm of "irrelevant sound". The "irrelevant sound (speech) effect" (ISE) can be quantified by comparing the scores of memory-recall tests under different acoustic conditions in relation to a silence condition. The ISE is often explained by the changing-state hypothesis: the automatic segmentation of successive sound tokens and the changes in spectro-temporal properties of successive tokens. The present project investigates the relation between spectral (frequency domain correlation coefficient, FDCC) and temporal (average modulation transfer function, AMTF) sound features and the ISE, by synthesizing sounds with dedicated properties along these two dimensions. A first memory-recall experiment was carried out to determine the predictive value of these two estimators. The data showed no significant performance change for any of the synthesized acoustic conditions. One possible explanation for this result might lie in the regular structure of the stimuli, which made them very different from speech stimuli and prevented an ISE from arising. This shortcoming will be addressed in a new experiment by defining a stimulus that has enough speech-like properties to create an ISE while allowing its temporal and spectral features to be modified independently.

From facial recognition to the recognition of facial expressions: a full-scale laboratory study

J. Rahm & M. Johansson

Outdoor lighting makes public space accessible after dark and aims to improve the safety of pedestrians. A systematic literature review shows that facial recognition is the most relevant visual task for judging the potential threat of other pedestrians. In the identified literature it has been operationalized as the task of recognising gender, guessing identity, or seeing facial features. However, facial expressions might be of greater importance for perceived safety. As a subsequent step to the literature review, recognition of facial expressions was explored in a full-scale laboratory containing a pathway and two luminaires (18 m apart). The participants (n=91, age: 20-75, 52% women) walked along the footpath under three different lighting designs (2 LEDs and 1 CMH) and stopped when they could make out the expression in a photograph of a woman's face (175x200 mm; placed at a height of 1.65 m). The facial expressions depicted (fear, surprise and anger, from P. Ekman's Emotions Revealed photo set) were assessed on 11 emotions rated on a 5-point Likert scale. Data will be statistically analysed to identify any differences in recognised facial expressions due to variations in illumination, scotopic/photopic ratios and glare. The outcome will inform lighting designs for pedestrians in urban areas.

Understanding the long-term impact of new school environments on secondary school teachers

E. Edgerton, J. McKechnie, & S. McEwen

Over the last 10 years, Scotland (like the rest of the UK) has experienced an unprecedented level of investment in its school estate. The majority of this investment has been focused on secondary schools (students aged 12-17). The impact of new school buildings is poorly understood, and there is a lack of an empirical evidence base on the relationship between school buildings and their users. This paper will focus on a longitudinal study that we conducted to investigate the impact of new secondary school buildings from the perspective of one important user group, namely teachers. The study collected data from teachers at six secondary schools in Central Scotland that were part of a rebuilding programme. Data were collected at four different points in time over an 8-year period (from pre- to post-construction). The data consisted of measures of behaviour, self-esteem, work self-perceptions, job satisfaction and perceptions of the school environment. The presentation will focus primarily on the data from the final phase of data collection, i.e., four years after the new schools had been completed. The findings will be discussed in terms of the impact of new school buildings on a range of teacher-related outcomes and whether this impact endures long after the new schools have been built.

Higher Task Difficulty Shields Against Background Speech

N. Halin, J. E. Marsh, & P. Sörqvist

Performance on visual-verbal tasks is generally impaired by task-irrelevant background speech, which can have consequences for individuals who work in noisy environments (e.g., schools or offices). This study examined the role that increased task difficulty plays in shielding against the effects of background speech. This issue was addressed across four experiments in which the level of task difficulty on visual-verbal tasks was manipulated (e.g., by changing the font of a text to one that is harder to read). Experiments 1 to 3 qualified the general finding that background speech impairs performance on visual-verbal tasks (proofreading and prose memory): impairment occurred only when task difficulty was low, not when it was high. Moreover, Experiment 4 demonstrated that higher difficulty on the focal task (n-back) also reduced recall on a surprise memory test for the content of a to-be-ignored background story. These results suggest that an increase in task difficulty, which promotes greater task engagement, can shield against the detrimental effects of background speech, possibly by constraining the processing of complex semantic information present within the background speech.