Thursday, September 3, 2009

Hot off the presses! Sep 01 Exp Brain Res

The Sep 01 issue of Exp Brain Res is now up on Pubget (About Exp Brain Res): if you're at a subscribing institution, just click the latest-issue link on the home page. (Note that you'll only be able to get all the PDFs in the issue if your institution subscribes to Pubget.)

Latest Articles Include:

  • Crossmodal processing
    Spence C Senkowski D Röder B - Exp Brain Res 198(2-3):107-111 (2009)
  • Challenges in quantifying multisensory integration: alternative criteria, models, and inverse effectiveness
    Stein BE Stanford TR Ramachandran R Perrault TJ Rowland BA - Exp Brain Res 198(2-3):113-126 (2009)
    Single-neuron studies provide a foundation for understanding many facets of multisensory integration. These studies have used a variety of criteria for identifying and quantifying multisensory integration. While a number of techniques have been used, an explicit discussion of the assumptions, criteria, and analytical methods traditionally used to define the principles of multisensory integration is lacking. This was not problematic when the field was small, but with rapid growth a number of alternative techniques and models have been introduced, each with its own criteria and sets of implicit assumptions to define and characterize what is thought to be the same phenomenon. The potential for misconception prompted this reexamination of traditional approaches in order to clarify their underlying assumptions and analytic techniques. The objective here is to review and discuss traditional quantitative methods advanced in the study of single-neuron physiology in order to appreciate the process of multisensory integration and its impact.
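    A minimal Python sketch of the conventional single-neuron enhancement index that underlies the traditional criteria discussed above (percent change of the multisensory response relative to the best unisensory response); the spike counts and the statistical test are illustrative assumptions, not data or methods from the paper.

        import numpy as np
        from scipy import stats

        def enhancement_index(multi, uni_a, uni_b):
            # Percent change of the mean multisensory response relative to
            # the larger of the two mean unisensory responses.
            best_uni = max(np.mean(uni_a), np.mean(uni_b))
            return 100.0 * (np.mean(multi) - best_uni) / best_uni

        # Illustrative spike counts per trial (fabricated).
        vis = np.array([3, 4, 2, 5, 3])
        aud = np.array([2, 2, 3, 1, 2])
        both = np.array([8, 9, 7, 10, 8])

        print(f"enhancement = {enhancement_index(both, vis, aud):+.1f}%")

        # One common criterion: the multisensory response differs reliably
        # from the best unisensory response.
        best = vis if vis.mean() >= aud.mean() else aud
        t, p = stats.ttest_ind(both, best)
        print(f"t = {t:.2f}, p = {p:.4f}")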
  • Spatiotemporal architecture of cortical receptive fields and its impact on multisensory interactions
    Royal DW Carriere BN Wallace MT - Exp Brain Res 198(2-3):127-136 (2009)
    Recent electrophysiology studies have suggested that neuronal responses to multisensory stimuli may possess a unique temporal signature. To evaluate this temporal dynamism, unisensory and multisensory spatiotemporal receptive fields (STRFs) of neurons in the cortex of the cat anterior ectosylvian sulcus were constructed. Analyses revealed that the multisensory STRFs of these neurons differed significantly from the component unisensory STRFs and their linear summation. Most notably, multisensory responses were found to have higher peak firing rates, shorter response latencies, and longer discharge durations. More importantly, multisensory STRFs were characterized by two distinct temporal phases of enhanced integration that reflected the shorter response latencies and longer discharge durations. These findings further our understanding of the temporal architecture of cortical multisensory processing, and thus provide important insights into the possible functional role(s) played by multisensory cortex in spatially directed perceptual processes.
  • Visual stimulus locking of EEG is modulated by temporal congruency of auditory stimuli
    Schall S Quigley C Onat S König P - Exp Brain Res 198(2-3):137-151 (2009)
    Disparate sensory streams originating from a common underlying event share similar dynamics, and this plays an important part in multisensory integration. Here we investigate audiovisual binding by presenting continuously changing, temporally congruent and incongruent stimuli. Recorded EEG signals are used to quantify spectrotemporal and waveform locking of neural activity to stimulus dynamics. Spectrotemporal analysis reveals locking to visual stimulus dynamics in both a broad alpha and the beta band. The properties of these effects suggest they are a correlate of bottom-up processing in the visual system. Waveform locking reveals two cortically distinct processes that lock to visual stimulus dynamics with differing topographies and time lags relative to the stimuli. Most importantly, these are modulated in strength by the congruency of an accompanying auditory stream. In addition, the waveform locking found at occipital electrodes shows an increase over stimulus duration for visual and congruent audiovisual stimuli. Hence we argue that these effects reflect audiovisual interaction. We thus propose that spectrotemporal and waveform locking reflect different mechanisms involved in the processing of dynamic audiovisual stimuli.
  • Multisensory functional magnetic resonance imaging: a future perspective
    Goebel R van Atteveldt N - Exp Brain Res 198(2-3):153-164 (2009)
    Advances in functional magnetic resonance imaging (fMRI) technology and analytic tools provide a powerful approach to unravel how the human brain combines the different sensory systems. In this perspective, we outline promising future directions of fMRI to make optimal use of its strengths in multisensory research, and to compensate for its weaknesses by combining it with other imaging modalities and computational modeling.
  • Multisensory visual–tactile object related network in humans: insights gained using a novel crossmodal adaptation approach
    Tal N Amedi A - Exp Brain Res 198(2-3):165-182 (2009)
    Neuroimaging techniques have provided ample evidence for multisensory integration in humans. However, it is not clear whether this integration occurs at the neuronal level or whether it reflects areal convergence without such integration. To examine this issue as regards visuo-tactile object integration we used the repetition suppression effect, also known as the fMRI-based adaptation paradigm (fMR-A). Under some assumptions, fMR-A can tag specific neuronal populations within an area and investigate their characteristics. This technique has been used extensively in unisensory studies. Here we applied it for the first time to study multisensory integration and identified a network of occipital (LOtv and calcarine sulcus), parietal (aIPS), and prefrontal (precentral sulcus and the insula) areas all showing a clear crossmodal repetition suppression effect. These results provide a crucial first insight into the neuronal basis of visuo-haptic integration of objects in humans and highlight the power of using fMR-A to study multisensory integration using non-invasive neuroimaging techniques.
  • An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI
    Stevenson RA Kim S James TW - Exp Brain Res 198(2-3):183-194 (2009)
    It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.
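    To make the contrast concrete, here is a hedged Python sketch of the two analyses named above: the classic superadditivity criterion (AV exceeding A + V) and a simplified additive-factors interaction between modality and signal-to-noise ratio. All numbers are fabricated for illustration, and the interaction contrast is a deliberately minimal stand-in for the full factorial analysis.

        import numpy as np

        # Fabricated BOLD betas (% signal change): rows are SNR levels
        # (high, low); columns are A, V, AV.
        betas = np.array([
            [0.40, 0.50, 1.10],   # high SNR
            [0.20, 0.25, 0.80],   # low SNR
        ])

        # Superadditivity criterion: AV exceeds the sum of A and V.
        for snr, (a, v, av) in zip(("high", "low"), betas):
            print(f"{snr} SNR: AV - (A + V) = {av - (a + v):+.2f}")

        # Additive-factors logic: if SNR and modality tap a shared stage,
        # the multisensory advantage should change with SNR (an
        # interaction) rather than stay constant across SNR levels.
        advantage = betas[:, 2] - betas[:, :2].max(axis=1)
        print(f"interaction contrast = {advantage[0] - advantage[1]:+.2f}")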
  • Audiovisual temporal capture underlies flash fusion
    Kawabe T - Exp Brain Res 198(2-3):195-208 (2009)
    When sequential visual flashes are accompanied by a lower number of sequential auditory pulses, the perceived number of visual flashes is lower than the actual number, an illusion termed 'flash fusion'. We examined whether temporal capture of flashes by pulses underlay flash fusion. One of the visual flashes was given a luminance increment, and observers reported which flash had the luminance increment. Results showed that the pulse strongly captured the flashes in its temporal vicinity, resulting in flash fusion. Moreover, when one of the successive pulses was given a higher frequency than others, the luminance increment was perceptually paired with the pulse with the higher frequency. The pairing of audiovisual features disappeared when the temporal pattern of the pulse frequency was difficult for the observer to anticipate. These data indicate that flash fusion is caused by temporal capture of flashes by the pulse, and that feature matching between auditory and visual signals also contributes to the modulation of perceived temporal structure of flashes during flash fusion.
  • Catch the moment: multisensory enhancement of rapid visual events by sound
    Chen YC Yeh SL - Exp Brain Res 198(2-3):209-219 (2009)
    Repetition blindness (RB) is a visual deficit, wherein observers fail to perceive the second occurrence of a repeated item in a rapid serial visual presentation stream. Chen and Yeh (Psychon Bull Rev 15:404–408, 2008) recently observed a reduction of the RB effect when the repeated items were accompanied by two sounds. The current study further manipulated the pitch of the two sounds (same versus different) in order to examine whether this cross-modal facilitation effect is caused by the multisensory enhancement of the visual event by sound, or multisensory Gestalt (perceptual grouping) of a new representation formed by combining the visual and auditory inputs. The results showed robust facilitatory effects of sound on RB regardless of the pitch of the sounds (Experiment 1), despite an effort to further increase the difference in pitch (Experiment 2). Experiment 3 revealed a close link between participants' awareness of pitch and the effect of pitch on the RB effect. We conclude that the facilitatory effect of sound on RB results from multisensory enhancement of the perception of visual events by auditory signals.
  • Perceived timing of vestibular stimulation relative to touch, light and sound
    Barnett-Cowan M Harris LR - Exp Brain Res 198(2-3):221-231 (2009)
    Different senses have different processing times. Here we measured the perceived timing of galvanic vestibular stimulation (GVS) relative to tactile, visual and auditory stimuli. Simple reaction times for perceived head movement (438 ± 49 ms) were significantly longer than those to touches (245 ± 14 ms), lights (220 ± 13 ms), or sounds (197 ± 13 ms). Temporal order and simultaneity judgments both indicated that GVS had to occur about 160 ms before other stimuli to be perceived as simultaneous with them. This lead was significantly less than the relative timing predicted by reaction time differences, compatible with an incomplete tendency to compensate for differences in processing times.
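    A point of subjective simultaneity like the ~160 ms lead reported above is typically estimated by fitting a cumulative Gaussian to temporal-order-judgment data. A minimal Python sketch under that standard assumption, with fabricated response proportions rather than the study's data.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def cum_gauss(soa, pss, sigma):
            # P("comparison first") as a function of SOA (ms).
            return norm.cdf(soa, loc=pss, scale=sigma)

        # Fabricated data: SOA = comparison onset minus GVS onset (ms),
        # with the proportion of "comparison first" responses per SOA.
        soa = np.array([-300, -200, -100, 0, 100, 200, 300])
        prop = np.array([0.05, 0.10, 0.20, 0.35, 0.60, 0.85, 0.95])

        (pss, sigma), _ = curve_fit(cum_gauss, soa, prop, p0=(0.0, 100.0))
        print(f"PSS = {pss:.0f} ms, JND = {sigma * norm.ppf(0.75):.0f} ms")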
  • Stimulus duration influences perceived simultaneity in audiovisual temporal-order judgment
    Boenke LT Deliano M Ohl FW - Exp Brain Res 198(2-3):233-244 (2009)
    The temporal integration of stimuli in different sensory modalities plays a crucial role in multisensory processing. Previous studies using temporal-order judgments to determine the point of subjective simultaneity (PSS) with multisensory stimulation yielded conflicting results on modality-specific delays. While it is known that the relative stimulus intensities of stimuli from different sensory modalities affect their perceived temporal order, we have hypothesized that some of these discrepancies might be explained by a previously overlooked confounding factor, namely the duration of the stimulus. We therefore studied the influence of both factors on the PSS in a spatial-audiovisual temporal-order task. In addition to confirming previous results on the role of stimulus intensity, we report that varying the temporal duration of an audiovisual stimulus pair also affects the perceived temporal order of the auditory and visual stimulus components. Although individual PSS values varied from negative to positive values across participants, we found a systematic shift of PSS values in all participants toward a common attractor value with increasing stimulus duration. This resulted in a stabilization of PSS values with increasing stimulus duration, indicative of a mechanism that compensates individual imbalances between sensory modalities, which might arise from attentional biases toward one modality at short stimulus durations.
  • Audio–tactile superiority over visuo–tactile and audio–visual combinations in the temporal resolution of synchrony perception
    Fujisaki W Nishida S - Exp Brain Res 198(2-3):245-259 (2009)
    To see whether there is a difference in temporal resolution of synchrony perception between audio–visual (AV), visuo–tactile (VT), and audio–tactile (AT) combinations, we compared synchrony–asynchrony discrimination thresholds of human participants. Visual and auditory stimuli were, respectively, a luminance-modulated Gaussian blob and an amplitude-modulated white noise. Tactile stimuli were mechanical vibrations presented to the index finger. All the stimuli were temporally modulated by either single pulses or repetitive-pulse trains. The results show that the temporal resolution of synchrony perception was similar for AV and VT (e.g., ~4 Hz for repetitive-pulse stimuli), but significantly higher for AT (~10 Hz). Apart from having a higher temporal resolution, however, AT synchrony perception was similar to AV synchrony perception in that participants could select matching features through attention, and a change in the matching-feature attribute had little effect on temporal resolution. The AT superiority in temporal resolution was indicated not only by synchrony–asynchrony discrimination but also by simultaneity judgments. Temporal order judgments were less affected by modality combination than the other two tasks.
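    A sketch of how an amplitude-modulated white-noise stimulus of the kind described above might be generated in Python, assuming a 44.1 kHz sample rate and illustrative modulation parameters (a 4 Hz train of 50 ms raised-cosine pulses); the paper's exact stimulus parameters are not reproduced here.

        import numpy as np

        fs = 44100                        # sample rate in Hz (an assumption)
        dur = 2.0                         # stimulus duration in seconds
        t = np.arange(int(fs * dur)) / fs

        carrier = np.random.randn(t.size)       # white-noise carrier

        # Envelope: a repetitive train of raised-cosine pulses.
        rate, pulse_dur = 4.0, 0.050            # pulse rate (Hz), width (s)
        cycle = np.mod(t, 1.0 / rate)           # time since last pulse onset
        env = np.where(cycle < pulse_dur,
                       0.5 * (1.0 - np.cos(2.0 * np.pi * cycle / pulse_dur)),
                       0.0)

        am_noise = carrier * env                # amplitude-modulated noise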
  • Prevalence, characteristics and a neurocognitive model of mirror-touch synaesthesia
    Banissy MJ Cohen Kadosh R Maus GW Walsh V Ward J - Exp Brain Res 198(2-3):261-272 (2009)
    In so-called 'mirror-touch synaesthesia', observing touch to another person induces a subjective tactile sensation on the synaesthete's own body. It has been suggested that this type of synaesthesia depends on increased activity in neural systems activated when observing touch to others. Here we report the first study on the prevalence of this variant of synaesthesia. Our findings indicate that this type of synaesthesia is just as common as, if not more common than, some of the more frequently studied varieties of synaesthesia such as grapheme-colour synaesthesia. Additionally, we examine behavioural correlates associated with the condition. In a second experiment, we show that synaesthetic experiences are not related to somatotopic cueing—a flash of light on an observed body part does not elicit the behavioural or subjective characteristics of synaesthesia. Finally, we propose a neurocognitive model to account for these characteristics and discuss the implications of our findings for general theories of synaesthesia.
  • Preservation of crossmodal selective attention in healthy aging
    Hugenschmidt CE Peiffer AM McCoy TP Hayasaka S Laurienti PJ - Exp Brain Res 198(2-3):273-285 (2009)
    The goal of the present study was to determine if older adults benefited from attention to a specific sensory modality in a voluntary attention task and evidenced changes in voluntary or involuntary attention when compared to younger adults. Suppressing and enhancing effects of voluntary attention were assessed using two cued forced-choice tasks, one that asked participants to localize and one that asked them to categorize visual and auditory targets. Involuntary attention was assessed using the same tasks, but with no attentional cues. The effects of attention were evaluated using traditional comparisons of means and Cox proportional hazards models. All analyses showed that older adults benefited behaviorally from selective attention in both visual and auditory conditions, including robust suppressive effects of attention. Of note, the performance of the older adults was commensurate with that of younger adults in almost all analyses, suggesting that older adults can successfully engage crossmodal attention processes. Thus, age-related increases in distractibility across sensory modalities are likely due to mechanisms other than deficits in attentional processing.
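    The Cox proportional hazards analysis mentioned above treats each trial's reaction time as a time-to-event. A minimal Python sketch using the lifelines package (a library choice assumed here, not named by the authors), on fabricated trial data; the predictors are illustrative dummy codes, not the study's design matrix.

        import pandas as pd
        from lifelines import CoxPHFitter

        # Fabricated trial-level data: RT in ms, whether a response was
        # made before the deadline, and dummy-coded predictors.
        df = pd.DataFrame({
            "rt_ms":    [420, 380, 510, 450, 395, 470, 360, 440, 480, 505],
            "observed": [1,   1,   1,   1,   1,   1,   1,   1,   0,   1],
            "cued":     [1,   1,   0,   0,   1,   0,   1,   0,   0,   1],
            "older":    [0,   1,   0,   1,   0,   1,   0,   1,   1,   0],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="rt_ms", event_col="observed")
        cph.print_summary()   # hazard ratios for the cue and age effects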
  • The role of attention on the integration of visual and inertial cues
    Berger DR Bülthoff HH - Exp Brain Res 198(2-3):287-300 (2009)
    The extent to which attending to one stimulus while ignoring another influences the integration of visual and inertial (vestibular, somatosensory, proprioceptive) stimuli is currently unknown. It is also unclear how cue integration is affected by an awareness of cue conflicts. We investigated these questions using a turn-reproduction paradigm, where participants were seated on a motion platform equipped with a projection screen and were asked to actively return a combined visual and inertial whole-body rotation around an earth-vertical axis. By introducing cue conflicts during the active return and asking the participants whether they had noticed a cue conflict, we measured the influence of each cue on the response. We found that the task instruction had a significant effect on cue weighting in the response, with a higher weight assigned to the attended modality, only when participants noticed the cue conflict. This suggests that participants used task-induced attention to reduce the influence of stimuli that conflict with the task instructions.
  • Action preparation enhances the processing of tactile targets
    Juravle G Deubel H - Exp Brain Res 198(2-3):301-311 (2009)
    We present two experiments in which we investigated whether tactile attention is modulated by action preparation. In Experiment 1, participants prepared a saccade toward either the left or right index finger, depending on the pitch of a non-predictive auditory cue. In Experiment 2, participants prepared to lift the left or right index finger in response to the auditory cue. In half of the trials in both experiments, a suprathreshold vibratory stimulus was presented with equal probability to either finger, to which the participants made a speeded foot response. The results showed facilitation in the processing of targets delivered at the goal location of the prepared movement (Experiment 1), as well as at the effector of the prepared movement (Experiment 2). These results are discussed within the framework of theories on motor preparation and spatial attention.
  • Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli
    Talsma D Senkowski D Woldorff MG - Exp Brain Res 198(2-3):313-328 (2009)
    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared among each other and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
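    The extraction described above rests on the additive model for ERPs: the waveform to an audiovisual stimulus is compared against combinations of the unisensory waveforms, so any residual indexes multisensory interaction. A minimal Python sketch of that comparison on synthetic data; the array shapes, sampling rate, and names are assumptions, not the study's pipeline.

        import numpy as np

        fs = 500                              # sampling rate in Hz (assumed)
        t = np.arange(-0.1, 0.5, 1.0 / fs)    # epoch, -100 to 500 ms
        n_ch = 32
        rng = np.random.default_rng(0)

        # Synthetic trial-averaged ERPs (channels x samples), not real data.
        erp_a = rng.normal(0.0, 0.1, (n_ch, t.size))
        erp_v = rng.normal(0.0, 0.1, (n_ch, t.size))
        erp_av = erp_a + erp_v + rng.normal(0.0, 0.1, (n_ch, t.size))

        # Additive-model residual: AV - (A + V). Values beyond noise index
        # audiovisual interaction; in the study, comparisons of this kind
        # are made separately per SOA subrange and attention condition.
        interaction = erp_av - (erp_a + erp_v)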
  • Perceptual learning of view-independence in visuo-haptic object representations
    Lacey S Pappas M Kreps A Lee K Sathian K - Exp Brain Res 198(2-3):329-337 (2009)
    We previously showed that cross-modal recognition of unfamiliar objects is view-independent, in contrast to view-dependence within-modally, in both vision and haptics. Does the view-independent, bisensory representation underlying cross-modal recognition arise from integration of unisensory, view-dependent representations or intermediate, unisensory but view-independent representations? Two psychophysical experiments sought to distinguish between these alternative models. In both experiments, participants began from baseline, within-modal, view-dependence for object recognition in both vision and haptics. The first experiment induced within-modal view-independence by perceptual learning, which was completely and symmetrically transferred cross-modally: visual view-independence acquired through visual learning also resulted in haptic view-independence and vice versa. In the second experiment, both visual and haptic view-dependence were transformed to view-independence by either haptic-visual or visual-haptic cross-modal learning. We conclude that cross-modal view-independence fits with a model in which unisensory view-dependent representations are directly integrated into a bisensory, view-independent representation, rather than via intermediate, unisensory, view-independent representations.
  • Multisensory integration of drumming actions: musical expertise affects perceived audiovisual asynchrony
    Petrini K Dahl S Rocchesso D Waadeland CH Avanzini F Puce A Pollick FE - Exp Brain Res 198(2-3):339-352 (2009)
    We investigated the effect of musical expertise on sensitivity to asynchrony for drumming point-light displays, which varied in their physical characteristics (Experiment 1) or in their degree of audiovisual congruency (Experiment 2). In Experiment 1, 21 repetitions of three tempos × three accents × nine audiovisual delays were presented to four jazz drummers and four novices. In Experiment 2, ten repetitions of two audiovisual incongruency conditions × nine audiovisual delays were presented to 13 drummers and 13 novices. Participants gave forced-choice judgments of audiovisual synchrony. The results of Experiment 1 show an enhancement in experts' ability to detect asynchrony, especially for slower drumming tempos. In Experiment 2 an increase in sensitivity to asynchrony was found for incongruent stimuli; this increase, however, was attributable only to the novice group. Altogether the results indicated that through musical practice we learn to ignore variations in stimulus characteristics that otherwise would affect our multisensory integration processes.
  • Specificity of auditory-guided visual perceptual learning suggests crossmodal plasticity in early visual cortex
    Beer AL Watanabe T - Exp Brain Res 198(2-3):353-361 (2009)
    Sounds modulate visual perception. Blind humans show altered brain activity in early visual cortex. However, it is still unclear whether crossmodal activity in visual cortex results from unspecific top-down feedback, a lack of visual input, or genuinely reflects crossmodal interactions at early sensory levels. We examined how sounds affect visual perceptual learning in sighted adults. Visual motion discrimination was tested prior to and following eight sessions in which observers were exposed to irrelevant moving dots while detecting sounds. After training, visual discrimination improved more strongly for motion directions that were paired with a relevant sound during training than for other directions. Crossmodal learning was limited to visual field locations that overlapped with the sound source and was little affected by attention. The specificity and automatic nature of these learning effects suggest that sounds automatically guide visual plasticity at a relatively early level of processing.
  • Gamma-band activity reflects multisensory matching in working memory
    Senkowski D Schneider TR Tandler F Engel AK - Exp Brain Res 198(2-3):363-372 (2009)
    In real-world situations, the integration of sensory information in working memory (WM) is an important mechanism for the recognition of objects. Studies in single sensory modalities show that object recognition is facilitated if bottom-up inputs match a template held in WM, and that this effect may be linked to enhanced synchronization of neurons in the gamma-band (>30 Hz). Natural objects, however, frequently provide inputs to multiple sensory modalities. In this EEG study, we examined the integration of semantically matching or non-matching visual and auditory inputs using a delayed visual-to-auditory object-matching paradigm. In the event-related potentials (ERPs) triggered by auditory inputs, effects of semantic matching were observed after 120–170 ms at frontal and posterior regions, indicating WM-specific processing across modalities, and after 250–400 ms over medial-central regions, possibly reflecting the contextual integration of sensory inputs. Additionally, total gamma-band activity (GBA) with medial-central topography after 120–180 ms was larger for matching compared to non-matching trials. This demonstrates that multisensory matching in WM is reflected by GBA and that dynamic coupling of neural populations in this frequency range might be a crucial mechanism for integrative multisensory processes.
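    Total gamma-band activity is commonly estimated by convolving single trials with a complex Morlet wavelet and averaging power across trials, so that non-phase-locked ("induced") activity is retained. The abstract does not specify the method, so this Python sketch is one plausible implementation on synthetic data.

        import numpy as np

        fs = 500                                     # Hz (an assumption)
        t = np.arange(0.0, 1.0, 1.0 / fs)
        rng = np.random.default_rng(1)
        trials = rng.normal(0.0, 1.0, (50, t.size))  # synthetic single trials

        # Complex Morlet wavelet centred at 40 Hz with ~7 cycles.
        freq, n_cycles = 40.0, 7.0
        sigma_t = n_cycles / (2.0 * np.pi * freq)    # Gaussian width (s)
        tw = np.arange(-0.25, 0.25, 1.0 / fs)
        wavelet = (np.exp(2j * np.pi * freq * tw)
                   * np.exp(-tw**2 / (2.0 * sigma_t**2)))
        wavelet /= np.abs(wavelet).sum()

        # Total power: transform each trial first, then average, so
        # non-phase-locked gamma survives the averaging step.
        power = np.mean(
            [np.abs(np.convolve(tr, wavelet, mode="same")) ** 2
             for tr in trials],
            axis=0,
        )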
  • Gender bending: auditory cues affect visual judgements of gender in biological motion displays
    van der Zwan R Machatch C Kozlowski D Troje NF Blanke O Brooks A - Exp Brain Res 198(2-3):373-382 (2009)
    The movement of an organism typically provides an observer with information in more than one sensory modality. The integration of information modalities reduces the likelihood that the observer will be confronted with a scene that is perceptually ambiguous. With that in mind, observers were presented with a series of point-light walkers each of which varied in the strength of the gender information they carried. Presenting those stimuli with auditory walking sequences containing ambiguous gender information had no effect on observers' ratings of visually perceived gender. When the visual stimuli were paired with auditory cues that were unambiguously female, observers' judgments of walker gender shifted such that ambiguous walkers were judged to look more female. To show that this is a perceptual rather than a cognitive effect, we induced visual gender after-effects with and without accompanying female auditory cues. The pairing of gender-neutral visual stimuli with unambiguous female auditory cues during adaptation elicited male after-effects. These data suggest that biological motion processing mechanisms can integrate auditory and visual cues to facilitate the extraction of higher-order features like gender. Possible neural substrates are discussed.
  • Neural correlates of audiovisual motion capture
    Stekelenburg JJ Vroomen J - Exp Brain Res 198(2-3):383-390 (2009)
    Visual motion can affect the perceived direction of auditory motion (i.e., audiovisual motion capture). It is debated, though, whether this effect occurs at perceptual or decisional stages. Here, we examined the neural consequences of audiovisual motion capture using the mismatch negativity (MMN), an event-related brain potential reflecting pre-attentive auditory deviance detection. In an auditory-only condition occasional changes in the direction of a moving sound (deviant) elicited an MMN starting around 150 ms. In an audiovisual condition, auditory standards and deviants were synchronized with a visual stimulus that moved in the same direction as the auditory standards. These audiovisual deviants did not evoke an MMN, indicating that visual motion reduced the perceptual difference between sound motion of standards and deviants. The inhibition of the MMN by visual motion provides evidence that auditory and visual motion signals are integrated at early sensory processing stages.
  • Spatially congruent visual motion modulates activity of the primary auditory cortex
    Zvyagintsev M Nikolaev AR Thönnessen H Sachs O Dammers J Mathiak K - Exp Brain Res 198(2-3):391-402 (2009)
    We investigated the brain responses to transitions from static to moving audiovisual stimuli using magnetoencephalography. The spatially congruent auditory and visual stimuli moved in the same direction whereas the incongruent stimuli moved in opposite directions. Using dipole modeling we found that the static-to-moving transitions evoked a neural response in the primary auditory cortex bilaterally. The response started about 100 ms after the motion onset from a negative component (mvN1) and lasted during the entire interval of the stimulus motion. The mvN1 component was similar to the classical auditory N1 response to the static sound, but had smaller amplitude and later latency. The coordinates of the mvN1 and N1 dipoles in the primary auditory cortex were also similar. The amplitude of the auditory response to the moving stimuli appears to be sensitive to spatial congruency of the audiovisual motion; it was larger in the incongruent than congruent condition. This is evidence that the moving visual stimuli modulate the early sensory activity in the primary auditory cortex. Such early audiovisual integration may be specific for motion processing.
  • Eye position affects the perceived location of touch
    Harrar V Harris LR - Exp Brain Res 198(2-3):403-410 (2009)
    Here, we demonstrate a systematic shift in the perceived location of a tactile stimulus on the arm toward where the eye is looking. Participants reported the perceived position of touches presented between the elbow and the wrist while maintaining eye positions at various eccentricities. The perceived location of the touch was shifted by between 1 and 5 cm (1.9°–9.5° visual angle) by a change in eye position of ±25° from straight ahead. In a control condition, we repeated the protocol with the eyes fixating straight ahead. Changes in attention accounted for only 17% of the shift due to eye position. The pattern of tactile shifts due to eye position was comparable whether or not the arm was visible. However, touches at locations along the forearm were perceived as being farther apart when the arm was visible compared to when it was covered. These results are discussed in terms of the coding of tactile space, which seems to require integration of tactile, visual, and eye position information.
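    The cm-to-degrees figures in this abstract follow from basic trigonometry; in the worked Python check below, the ~30 cm viewing distance is inferred from the reported numbers and is an assumption, not a value from the paper.

        import math

        def visual_angle_deg(size_cm, distance_cm):
            # Angle subtended by an object of a given size at a distance.
            return math.degrees(2.0 * math.atan(size_cm / (2.0 * distance_cm)))

        # Assumed ~30 cm viewing distance, chosen to match the abstract:
        for shift_cm in (1.0, 5.0):
            print(f"{shift_cm:.0f} cm -> "
                  f"{visual_angle_deg(shift_cm, 30.0):.1f} deg")
        # 1 cm -> 1.9 deg, 5 cm -> 9.5 deg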
  • Perisaccadic localization of auditory stimuli
    Klingenhoefer S Bremmer F - Exp Brain Res 198(2-3):411-423 (2009)
    Interaction with the outside world requires the knowledge about where objects are with respect to one's own body. Such spatial information is represented in various topographic maps in different sensory systems. From a computational point of view, however, a single, modality-invariant map of the incoming sensory signals appears to be a more efficient strategy for spatial representations. If such a single supra-modal map existed and were used for perceptual purposes, localization characteristics should be similar across modalities. Previous studies had shown mislocalization of brief visual stimuli presented in the temporal vicinity of saccadic eye-movements. Here, we tested whether such mislocalizations could also be found for auditory stimuli. We presented brief noise bursts before, during, and after visually guided saccades. Indeed, we found localization errors for these auditory stimuli. The spatio-temporal pattern of this mislocalization, however, clearly differed from the one found for visual stimuli. The spatial error also depended on the exact type of eye-movement (visually guided vs. memory guided saccades). Finally, results obtained in fixational control paradigms under different conditions suggest that auditory localization can be strongly influenced by both static and dynamic visual stimuli. Visual localization on the other hand is not influenced by distracting visual stimuli but can be inaccurate in the temporal vicinity of eye-movements. Taken together, our results argue against a single, modality-independent spatial representation of sensory signals.
  • The effect of spatial–temporal audiovisual disparities on saccades in a complex scene
    Van Wanrooij MM Bell AH Munoz DP Van Opstal AJ - Exp Brain Res 198(2-3):425-437 (2009)
    In a previous study we quantified the effect of multisensory integration on the latency and accuracy of saccadic eye movements toward spatially aligned audiovisual (AV) stimuli within a rich AV-background (Corneil et al. in J Neurophysiol 88:438–454, 2002). In those experiments both stimulus modalities belonged to the same object, and subjects were instructed to foveate that source, irrespective of modality. Under natural conditions, however, subjects have no prior knowledge as to whether visual and auditory events originated from the same, or from different objects in space and time. In the present experiments we included these possibilities by introducing various spatial and temporal disparities between the visual and auditory events within the AV-background. Subjects had to orient fast and accurately to the visual target, thereby ignoring the auditory distractor. We show that this task belies a dichotomy, as it was quite difficult to produce fast responses (<250 ms) that were not aurally driven. Subjects therefore made many erroneous saccades. Interestingly, for the spatially aligned events the inability to ignore auditory stimuli produced shorter reaction times, but also more accurate responses than for the unisensory target conditions. These findings, which demonstrate effective multisensory integration, are similar to the previous study, and the same multisensory integration rules are applied (Corneil et al. in J Neurophysiol 88:438–454, 2002). In contrast, with increasing spatial disparity, integration gradually broke down, as the subjects' responses became bistable: saccades were directed either to the auditory (fast responses), or to the visual stimulus (late responses). Interestingly, also in this case responses were faster and more accurate than to the respective unisensory stimuli.
