Similar Literature
20 similar articles found (search time: 515 ms)
1.
The perception of heading during eye movements.   Cited by: 5 (self-citations: 0, by others: 5)
C S Royden  M S Banks  J A Crowell 《Nature》1992,360(6404):583-585
When a person walks through a rigid environment while holding eyes and head fixed, the pattern of retinal motion flows radially away from a point, the focus of expansion (Fig. 1a). Under such conditions of translation, heading corresponds to the focus of expansion and people identify it readily. But when making an eye/head movement to track an object off to the side, retinal motion is no longer radial (Fig. 1b). Heading perception in such situations has been modelled in two ways. Extra-retinal models monitor the velocity of rotational movements through proprioceptive or efference information from the extraocular and neck muscles and use that information to discount rotation effects. Retinal-image models determine (and eliminate) rotational components from the retinal image alone. These models have been tested by measuring heading perception under two conditions. First, observers judged heading while tracking a point on a simulated ground plane. Second, they fixated a stationary point and the flow field simulated the effects of a tracking eye movement. Extra-retinal models predict poorer performance in the simulated condition because the eyes do not move. Retinal-image models predict no difference in performance because the two conditions produce identical patterns of retinal motion. Warren and Hannon observed similar performance and concluded that people do not require extra-retinal information to judge heading with eye/head movements present, but they used extremely slow tracking eye movements of 0.2-1.2 deg s^-1; a moving observer frequently tracks objects at much higher rates (L. Stark, personal communication). Here we examine heading judgements at higher, more typical eye movement velocities and find that people require extra-retinal information about eye position to perceive heading accurately under many viewing conditions.
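The claim that heading corresponds to the focus of expansion during pure translation can be made concrete with a small numerical sketch (illustrative only, not from the paper): every flow vector points away from the focus of expansion, so the point can be recovered from a sparse flow field by least squares. All names and parameters below are assumed for the example.

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares focus of expansion for a purely translational flow field.

    points : (N, 2) image positions
    flows  : (N, 2) flow vectors at those positions
    Each flow vector is assumed parallel to (point - FOE), so their cross
    product is zero; stacking that constraint gives A @ foe = b.
    """
    vx, vy = flows[:, 0], flows[:, 1]
    px, py = points[:, 0], points[:, 1]
    A = np.column_stack([vy, -vx])
    b = vy * px - vx * py
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial flow expanding from (3.0, -1.5): the estimate recovers it.
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(200, 2))
true_foe = np.array([3.0, -1.5])
flows = 0.05 * (pts - true_foe)          # pure translation, no rotation
print(focus_of_expansion(pts, flows))     # ~ [ 3.  -1.5]
```

Adding a rotational (pursuit) component to `flows` biases this estimate, which is exactly the ambiguity that the extra-retinal eye-velocity signal is invoked to resolve.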

2.
Olveczky BP  Baccus SA  Meister M 《Nature》2003,423(6938):401-408
An important task in vision is to detect objects moving within a stationary scene. During normal viewing this is complicated by the presence of eye movements that continually scan the image across the retina, even during fixation. To detect moving objects, the brain must distinguish local motion within the scene from the global retinal image drift due to fixational eye movements. We have found that this process begins in the retina: a subset of retinal ganglion cells responds to motion in the receptive field centre, but only if the wider surround moves with a different trajectory. This selectivity for differential motion is independent of direction, and can be explained by a model of retinal circuitry that invokes pooling over nonlinear interneurons. The suppression by global image motion is probably mediated by polyaxonal, wide-field amacrine cells with transient responses. We show how a population of ganglion cells selective for differential motion can rapidly flag moving objects, and even segregate multiple moving objects.
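As a toy illustration of the circuit idea (my construction, not the authors' fitted model): centre excitation is pooled from rectified motion transients, a wide-field surround signal is pooled the same way, and the ganglion output is the rectified difference, so coherent global motion cancels while differential motion does not.

```python
import numpy as np

rng = np.random.default_rng(1)

def transients(trajectory):
    """Rectified temporal transients of a moving stimulus: a crude stand-in
    for nonlinear bipolar-cell subunits that respond to motion of any sign."""
    return np.abs(np.diff(trajectory))

def ganglion_response(centre_traj, surround_traj, gain=1.0):
    """Toy object-motion-sensitive cell: centre excitation minus pooled
    surround (amacrine-like) suppression, half-wave rectified, summed."""
    drive = transients(centre_traj) - gain * transients(surround_traj)
    return np.maximum(drive, 0.0).sum()

global_jitter = np.cumsum(rng.normal(0, 1, 500))      # fixational eye drift
object_motion = np.cumsum(rng.normal(0, 1, 500))      # independent trajectory

# An object moving differently from the background drives the cell strongly...
print(ganglion_response(object_motion, global_jitter))
# ...but coherent global motion (centre == surround) is cancelled exactly.
print(ganglion_response(global_jitter, global_jitter))  # 0.0
```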

3.
M A Goodale  D Pelisson  C Prablanc 《Nature》1986,320(6064):748-750
When we reach towards an object that suddenly appears in our peripheral visual field, not only does our arm extend towards the object, but our eyes, head and body also move in such a way that the image of the object falls on the fovea. Popular models of how reaching movements are programmed have argued that while the first part of the limb movement is ballistic, subsequent corrections to the trajectory are made on the basis of dynamic feedback about the relative positions of the hand and the target provided by central vision. These models have assumed that the adjustments are dependent on seeing the hand moving with respect to the target. Here we present evidence that a change in the position of a visual target during a reaching movement can modify the trajectory even when vision of the hand is prevented. Moreover, these dynamic corrections to the trajectory of the moving limb occur without the subject perceiving the change in target location. These findings demonstrate that visual feedback about the relative position of the hand and target is not necessary for visually driven corrections in reaching to occur, and the mechanisms that maintain the apparent stability of a target in space are dissociable from those that mediate the visuomotor output directed at that target.  相似文献   

4.
Through the development of a high-acuity fovea, primates with frontal eyes have acquired the ability to use binocular eye movements to track small objects moving in space. The smooth-pursuit system moves both eyes in the same direction to track movement in the frontal plane (frontal pursuit), whereas the vergence system moves left and right eyes in opposite directions to track targets moving towards or away from the observer (vergence tracking). In the cerebral cortex and brainstem, signals related to vergence eye movements--and the retinal disparity and blur signals that elicit them--are coded independently of signals related to frontal pursuit. Here we show that these types of signal are represented in a completely different way in the smooth-pursuit region of the frontal eye fields. Neurons of the frontal eye field modulate strongly during both frontal pursuit and vergence tracking, which results in three-dimensional Cartesian representations of eye movements. We propose that the brain creates this distinctly different intermediate representation to allow these neurons to function as part of a system that enables primates to track and manipulate objects moving in three-dimensional space.

5.
Tanaka M  Lisberger SG 《Nature》2001,409(6817):191-194
In studies of the neural mechanisms giving rise to behaviour, changes in the neural and behavioural responses produced by a given stimulus have been widely reported. This 'gain control' can boost the responses to sensory inputs that are particularly relevant, select among reflexes for execution by motoneurons or emphasize specific movement targets. Gain control is also an integral part of the smooth-pursuit eye movement system. One signature of gain control is that a brief perturbation of a stationary target during fixation causes tiny eye movements, whereas the same perturbation of a moving target during the active state of accurate pursuit causes large responses. Here we show that electrical stimulation of the smooth-pursuit eye movement region in the arcuate sulcus of the frontal lobe ('the frontal pursuit area', FPA) mimics the active state of pursuit. Such stimulation enhances the response to a brief perturbation of target motion, regardless of the direction of motion. We postulate that the FPA sets the gain of pursuit, thereby participating in target selection for pursuit.

6.
Representation of a perceptual decision in developing oculomotor commands   Cited by: 15 (self-citations: 0, by others: 15)
Gold JI  Shadlen MN 《Nature》2000,404(6776):390-394
Behaviour often depends on the ability to make categorical judgements about sensory information acquired over time. Such judgements require a comparison of the evidence favouring the alternatives, but how the brain forms these comparisons is unknown. Here we show that in a visual discrimination task, the accumulating balance of sensory evidence favouring one interpretation over another is evident in the neural circuits that generate the behavioural response. We trained monkeys to make a direction judgement about dynamic random-dot motions and to indicate their judgement with an eye movement to a visual target. We interrupted motion viewing with electrical microstimulation of the frontal eye field and analysed the resulting evoked eye movements for evidence of ongoing activity associated with the oculomotor response. Evoked eye movements deviated in the direction of the monkey's judgement. The magnitude of the deviation depended on motion strength and viewing time. The oculomotor signals responsible for these deviations reflected the accumulated motion information that informed the monkey's choices on the discrimination task. Thus, for this task, decision formation and motor preparation appear to share a common level of neural organization.
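A minimal accumulator sketch (an assumed toy model, not the authors' analysis) illustrates why a deviation that grows with both motion strength and viewing time is the signature of accumulated evidence: the developing oculomotor command tracks the running sum of noisy momentary motion evidence.

```python
import numpy as np

rng = np.random.default_rng(2)

def accumulated_evidence(coherence, viewing_time_ms, dt=1.0, noise=1.0):
    """Running sum of noisy momentary motion evidence (drift-diffusion style).
    The sign gives the eventual direction choice; the magnitude is the kind of
    quantity that could bias an evoked saccade toward the chosen target."""
    n = int(viewing_time_ms / dt)
    samples = rng.normal(loc=coherence, scale=noise, size=n)
    return samples.sum()

for coh in (0.01, 0.05, 0.2):             # weak to strong motion
    for t in (200, 800):                   # short vs long viewing
        vals = [accumulated_evidence(coh, t) for _ in range(2000)]
        print(f"coherence={coh:<5} time={t:>4} ms  mean accumulation={np.mean(vals):7.1f}")
```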

7.
To study pilots' behaviour patterns when performing different flight tasks with a head-up display, a behaviour-recognition framework combining eye-movement, head-movement and hand-movement features is proposed. First, behaviour-pattern experiments were conducted: eye and head movements were captured with an eye tracker, and hand movements were obtained by video-based manual tracking. The experimental data were then used to train and test the model. Finally, the recognition performance of conditional random fields (CRF) and hidden-dynamic conditional random fields was compared across feature sets. The results show that, when eye-movement features are combined with hand-movement features, the hidden-dynamic conditional random field model recognizes the different flight tasks best.  相似文献
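The paper's feature pipeline is not reproduced here, but a linear-chain CRF baseline for this kind of sequence labelling might be set up as below (a sketch with hypothetical feature names, using sklearn-crfsuite for illustration; the hidden-dynamic CRF adds latent states and is not shown).

```python
# Each time window becomes a feature dict of eye, head and hand measurements,
# and a linear-chain CRF labels the flight task for every window.
import sklearn_crfsuite


def window_features(eye, head, hand):
    """eye/head/hand are per-window summary numbers (all hypothetical names)."""
    return {
        "fixation_rate": eye["fixation_rate"],
        "saccade_amplitude": eye["saccade_amplitude"],
        "head_yaw_velocity": head["yaw_velocity"],
        "hand_on_throttle": hand["on_throttle"],
    }


def to_sequence(windows):
    """Convert one flight's list of windows into a CRF feature sequence."""
    return [window_features(w["eye"], w["head"], w["hand"]) for w in windows]


# X_train: list of flights, each a list of per-window feature dicts
# y_train: matching list of per-window task labels, e.g. "approach", "cruise"
def train_crf(X_train, y_train):
    crf = sklearn_crfsuite.CRF(
        algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100
    )
    crf.fit(X_train, y_train)
    return crf
```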

8.
Niemeier M  Crawford JD  Tweed DB 《Nature》2003,422(6927):76-80
We scan our surroundings with quick eye movements called saccades, and from the resulting sequence of images we build a unified percept by a process known as transsaccadic integration. This integration is often said to be flawed, because around the time of saccades, our perception is distorted and we show saccadic suppression of displacement (SSD): we fail to notice if objects change location during the eye movement. Here we show that transsaccadic integration works by optimal inference. We simulated a visuomotor system with realistic saccades, retinal acuity, motion detectors and eye-position sense, and programmed it to make optimal use of these imperfect data when interpreting scenes. This optimized model showed human-like SSD and distortions of spatial perception. It made new predictions, including tight correlations between perception and motor action (for example, more SSD in people with less-precise eye control) and a graded contraction of perceived jumps; we verified these predictions experimentally. Our results suggest that the brain constructs its evolving picture of the world by optimally integrating each new piece of sensory or motor information.
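One way to see why optimal inference yields saccadic suppression of displacement is simple reliability-weighted Gaussian cue combination (a sketch under assumed noise models, not the authors' full visuomotor simulation): the measured trans-saccadic shift is shrunk toward a prior that objects do not jump, and the shrinkage grows as the retinal and eye-position signals get noisier.

```python
def perceived_jump(measured_jump, meas_sd, prior_sd=0.3):
    """MAP estimate of an object jump across a saccade, assuming Gaussian
    measurement noise (retinal + eye-position uncertainty, meas_sd) and a
    zero-mean Gaussian prior that objects stay put (prior_sd), in degrees.
    The shrinkage factor is the classic reliability weighting."""
    w = prior_sd**2 / (prior_sd**2 + meas_sd**2)
    return w * measured_jump

true_jump = 1.0   # degrees of displacement during the saccade
for meas_sd in (0.2, 0.5, 1.5):   # precise vs sloppy eye-position sense
    print(meas_sd, perceived_jump(true_jump, meas_sd))
# The noisier the measurement, the more the perceived jump contracts toward 0,
# i.e. more SSD: the correlation between eye-control precision and SSD that
# the paper predicts and verifies.
```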

9.
Direct visuomotor transformations for reaching   Cited by: 27 (self-citations: 0, by others: 27)
Buneo CA  Jarvis MR  Batista AP  Andersen RA 《Nature》2002,416(6881):632-636
The posterior parietal cortex (PPC) is thought to have a function in the sensorimotor transformations that underlie visually guided reaching, as damage to the PPC can result in difficulty reaching to visual targets in the absence of specific visual or motor deficits. This function is supported by findings that PPC neurons in monkeys are modulated by the direction of hand movement, as well as by visual, eye position and limb position signals. The PPC could transform visual target locations from retinal coordinates to hand-centred coordinates by combining sensory signals in a serial manner to yield a body-centred representation of target location, and then subtracting the body-centred location of the hand. We report here that in dorsal area 5 of the PPC, remembered target locations are coded with respect to both the eye and hand. This suggests that the PPC transforms target locations directly between these two reference frames. Data obtained in the adjacent parietal reach region (PRR) indicate that this transformation may be achieved by vectorially subtracting hand location from target location, with both locations represented in eye-centred coordinates.
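The proposed transformation amounts to a single vector subtraction carried out in eye-centred coordinates; a tiny sketch with made-up coordinates makes the reference-frame bookkeeping explicit.

```python
import numpy as np

# Both locations are expressed relative to the current fixation point
# (eye-centred coordinates), in degrees of visual angle; the numbers are
# illustrative only.
target_eye = np.array([12.0, -3.0])   # remembered target, eye-centred
hand_eye   = np.array([ 4.0,  5.0])   # current hand position, eye-centred

# The reach vector falls out of a single subtraction, with no intermediate
# body-centred representation required.
reach_vector = target_eye - hand_eye
print(reach_vector)                    # [ 8. -8.]
```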

10.
Optimal eye movement strategies in visual search   Cited by: 2 (self-citations: 0, by others: 2)
Najemnik J  Geisler WS 《Nature》2005,434(7031):387-391
To perform visual search, humans, like many mammals, encode a large field of view with retinas having variable spatial resolution, and then use high-speed eye movements to direct the highest-resolution region, the fovea, towards potential target locations. Good search performance is essential for survival, and hence mammals may have evolved efficient strategies for selecting fixation locations. Here we address two questions: what are the optimal eye movement strategies for a foveated visual system faced with the problem of finding a target in a cluttered environment, and do humans employ optimal eye movement strategies during a search? We derive the ideal Bayesian observer for search tasks in which a target is embedded at an unknown location within a random background that has the spectral characteristics of natural scenes. Our ideal searcher uses precise knowledge about the statistics of the scenes in which the target is embedded, and about its own visual system, to make eye movements that gain the most information about target location. We find that humans achieve nearly optimal search performance, even though humans integrate information poorly across fixations. Analysis of the ideal searcher reveals that there is little benefit from perfect integration across fixations--much more important is efficient processing of information on each fixation. Apparently, evolution has exploited this fact to achieve efficient eye movement strategies with minimal neural resources devoted to memory.
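A stripped-down version of the Bayesian bookkeeping (my sketch, with a simplified "fixate the current most probable location" policy rather than the authors' information-maximizing rule): each fixation returns a noisy response at every candidate location, detectability falls off with eccentricity from the fovea, and the posterior over target location is updated after every fixation.

```python
import numpy as np

rng = np.random.default_rng(3)

n_loc = 100                          # candidate target locations on a line
locs = np.arange(n_loc, dtype=float)
target = 72                          # true target location (unknown to searcher)

def dprime(fixation):
    """Detectability of the target at each location for a given fixation:
    highest at the fovea, falling off with eccentricity (assumed shape)."""
    return 3.0 * np.exp(-np.abs(locs - fixation) / 10.0)

log_post = np.zeros(n_loc)           # flat prior over target location
fixation = n_loc / 2.0

for n_fix in range(1, 21):
    d = dprime(fixation)
    # Noisy responses: signal d' at the true target, 0 elsewhere, unit noise.
    x = rng.normal(0.0, 1.0, n_loc)
    x[target] += d[target]
    # Bayesian update of the log posterior for "target is at location j".
    log_post += d * x - 0.5 * d**2
    # Simplified policy: fixate the current most probable location.
    fixation = locs[np.argmax(log_post)]
    if fixation == target:
        print(f"fixated the target after {n_fix} fixations")
        break
```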

11.
Miniature eye movements enhance fine spatial detail   Cited by: 1 (self-citations: 0, by others: 1)
Rucci M  Iovin R  Poletti M  Santini F 《Nature》2007,447(7146):851-854
Our eyes are constantly in motion. Even during visual fixation, small eye movements continually jitter the location of gaze. It is known that visual percepts tend to fade when retinal image motion is eliminated in the laboratory. However, it has long been debated whether, during natural viewing, fixational eye movements have functions in addition to preventing the visual scene from fading. In this study, we analysed the influence in humans of fixational eye movements on the discrimination of gratings masked by noise that has a power spectrum similar to that of natural images. Using a new method of retinal image stabilization, we selectively eliminated the motion of the retinal image that normally occurs during the intersaccadic intervals of visual fixation. Here we show that fixational eye movements improve discrimination of high spatial frequency stimuli, but not of low spatial frequency stimuli. This improvement originates from the temporal modulations introduced by fixational eye movements in the visual input to the retina, which emphasize the high spatial frequency harmonics of the stimulus. In a natural visual world dominated by low spatial frequencies, fixational eye movements appear to constitute an effective sampling strategy by which the visual system enhances the processing of spatial detail.
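The mechanism, jitter converting spatial structure into temporal modulation more strongly at high spatial frequencies, can be checked in a few lines (an illustrative calculation with assumed jitter statistics, not the paper's stimuli):

```python
import numpy as np

rng = np.random.default_rng(4)

def temporal_modulation(spatial_freq_cpd, jitter_sd_deg=0.05, n_steps=5000):
    """Std over time of the luminance seen by one retinal point while the
    eye jitters: I(t) = sin(2*pi*f*(x0 + j(t))), evaluated at x0 = 0."""
    jitter = rng.normal(0.0, jitter_sd_deg, n_steps)
    luminance = np.sin(2 * np.pi * spatial_freq_cpd * jitter)
    return luminance.std()

for f in (0.5, 2.0, 8.0):   # cycles per degree
    print(f"{f:4.1f} cpd -> temporal modulation {temporal_modulation(f):.3f}")
# The same small jitter produces far stronger temporal modulation for the
# high-frequency grating, which is the effect the paper identifies.
```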

12.
Wexler M  Panerai F  Lamouret I  Droulez J 《Nature》2001,409(6816):85-88
One of the ways that we perceive shape is through seeing motion. Visual motion may be actively generated (for example, in locomotion), or passively observed. In the study of the perception of three-dimensional structure from motion, the non-moving, passive observer in an environment of moving rigid objects has been used as a substitute for an active observer moving in an environment of stationary objects; this 'rigidity hypothesis' has played a central role in computational and experimental studies of structure from motion. Here we show that this is not an adequate substitution because active and passive observers can perceive three-dimensional structure differently, despite experiencing the same visual stimulus: active observers' perception of three-dimensional structure depends on extraretinal information about their own movements. The visual system thus treats objects that are stationary (in an allocentric, earth-fixed reference frame) differently from objects that are merely rigid. These results show that action makes an important contribution to depth perception, and argue for a revision of the rigidity hypothesis to incorporate the special case of stationary objects.

13.
H M Cooper  M Magnin 《Nature》1986,324(6096):457-459
The accessory optic system (AOS), which was described as early as 1870 by Gudden, constitutes a distinct midbrain visual pathway in all classes of vertebrates. In non-primate mammals, retinal fibres of this system project to a set of three nuclei: the dorsal (DTN), the lateral (LTN) and the medial (MTN) terminal nuclei. Whereas all AOS cells respond to the slow motion of large visual stimuli, the neurons are tuned to complementary directions of movement: horizontal temporo-nasal direction for the DTN, vertical up and down for the LTN and vertical down for the MTN. It has thus been suggested that these nuclei establish a system of retinal coordinates for the detection of whole field motion. As the AOS provides direct and indirect pathways to both oculomotor and vestibular structures, each of these nuclei is thought to be an essential link in the co-ordination of eye and head movements in relation to movement within the visual field. One problem for the generalization of this theory is that the medial terminal nucleus has never been found in primates. In this report we establish both the existence of this nucleus and its afferent input from the retina in all major groups of primates (prosimians, New and Old World monkeys and apes), indicating a common anatomical plan of organization of the AOS in mammals.

14.
Predictable eye-head coordination during driving.   Cited by: 1 (self-citations: 0, by others: 1)
M F Land 《Nature》1992,359(6393):318-320
Large changes in the direction of gaze are made with a combination of fast saccadic eye movements and rather slower head movements. Since the first study on freely moving subjects, most authors have agreed that the head movement component of gaze is very variable, with a high 'volitional' component. But in some circumstances head and eye movements can be quite predictable, for example when a subject is asked to shift gaze as quickly as possible. Under these conditions, laboratory studies have shown that the eye and head motor systems both receive gaze-change commands, although they execute them in rather different ways. Here I reconsider the way gaze direction is changed during free movement, but in the performance of a task where the subject is too busy to exert conscious control over head or eye movements. Using a new portable and inexpensive method for recording head and eye movements, I examine the oculomotor behaviour of car drivers, particularly during the large gaze changes made at road junctions. The results show that the pattern of eye and head movements is highly predictable, given only the sequence of gaze targets.

15.
Parallel processing of motion and colour information   Cited by: 1 (self-citations: 0, by others: 1)
T Carney  M Shadlen  E Switkes 《Nature》1987,328(6131):647-649
When the two eyes are confronted with sufficiently different versions of the visual environment, one or the other eye dominates perception in alternation. A similar situation may be created in the laboratory by presenting images to the left and right eyes which differ in orientation or colour. Although perception is dominated by one eye during rivalry, there are a number of instances in which visual processes nevertheless continue to integrate information from the suppressed eye. For example, the interocular transfer of the motion after-effect is undiminished when induced during binocular rivalry. Thus motion information processing may occur in parallel with the rivalry process. Here we describe a novel example in which the visual system simultaneously exhibits binocular rivalry and vision that integrates signals from both eyes. This apparent contradiction is resolved by postulating parallel visual processes devoted to the analyses of colour and motion information. Counterphased gratings are viewed dichoptically such that for one eye the grating is composed of alternating yellow and black stripes (luminance) while for the other it is composed of alternating red and green stripes (chrominance). When the gratings are fused, a moving grating is perceived. A consistent direction of motion can only be achieved if left and right monocular signals are integrated by the nervous system. Yet the apparent colour of the binocular percept alternates between red-green and yellow-black. These observations demonstrate the segregation of processing by the early motion system from that affording the perception of colour. Although, in this stimulus, colour information in itself can play no part in the cyclopean perception of motion direction, colour is carried along perceptually (filled in) by the moving pattern which is integrated from both eyes.

16.
To extract video objects accurately from a moving background, an adaptive video-object segmentation algorithm based on global motion is proposed. Inter-frame motion is computed from feature points, the camera's affine parameters are estimated by least squares for motion compensation, and a binary open-close reconstruction filter is applied as preprocessing to remove noise. An improved watershed algorithm then labels the image into distinct grey-level regions, and an adaptive optical-flow criterion evaluates the segmented regions to extract foreground objects from the moving background. Experiments show that the algorithm segments video objects accurately from moving backgrounds, markedly reduces the segmentation error for dynamic foreground objects, improves segmentation quality, and can be applied to moving-target detection and tracking.
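A plausible OpenCV sketch of the global-motion-compensation stage described above (feature-point flow plus a least-squares affine fit); the open-close reconstruction filtering and watershed refinement are omitted, and all thresholds and function names are chosen for illustration.

```python
import cv2
import numpy as np

def foreground_mask(prev_gray, curr_gray, diff_thresh=25):
    """Compensate camera (global) motion with an affine model estimated by
    least squares from tracked feature points, then flag pixels that still
    move. Only the motion-compensation stage of the algorithm is shown."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=8)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]
    # Robust least-squares fit of the camera's affine parameters.
    A, _ = cv2.estimateAffine2D(good_prev, good_curr, method=cv2.RANSAC)
    h, w = prev_gray.shape
    warped = cv2.warpAffine(prev_gray, A, (w, h))
    # After compensation, residual differences belong to foreground objects.
    residual = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(residual, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```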

17.
L Petreanu  DA Gutnisky  D Huber  NL Xu  DH O'Connor  L Tian  L Looger  K Svoboda 《Nature》2012,489(7415):299-303
Cortical-feedback projections to primary sensory areas terminate most heavily in layer 1 (L1) of the neocortex, where they make synapses with tuft dendrites of pyramidal neurons. L1 input is thought to provide ‘contextual’ information, but the signals transmitted by L1 feedback remain uncharacterized. In the rodent somatosensory system, the spatially diffuse feedback projection from vibrissal motor cortex (vM1) to vibrissal somatosensory cortex (vS1, also known as the barrel cortex) may allow whisker touch to be interpreted in the context of whisker position to compute object location. When mice palpate objects with their whiskers to localize object features, whisker touch excites vS1 and later vM1 in a somatotopic manner. Here we use axonal calcium imaging to track activity in vM1-->vS1 afferents in L1 of the barrel cortex while mice performed whisker-dependent object localization. Spatially intermingled individual axons represent whisker movements, touch and other behavioural features. In a subpopulation of axons, activity depends on object location and persists for seconds after touch. Neurons in the barrel cortex thus have information to integrate movements and touches of multiple whiskers over time, key components of object identification and navigation by active touch.

18.
Quantitative indices of recovery from perturbation help reveal the mechanisms of human balance control and provide scientifically grounded diagnostic methods and rehabilitation strategies for patients with impaired balance function. Based on Fitts' law, four indices are proposed for evaluating the recovery ability of the upper limb while grasping an object after a perturbation, and their validity is verified in experiments in which lateral impacts were applied to different parts of the grasping upper limb with and without vision. The results show that the shoulder joint recovers balance better than the elbow joint, and that without vision the recovery time increases while the recovery accuracy is unchanged.
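The paper's four indices are not spelled out in the abstract, but the Fitts'-law quantities they build on are standard: an index of difficulty ID = log2(2D/W) for a correction of amplitude D to a tolerance of width W, and a throughput-style index obtained by dividing ID by the recovery time. The snippet below illustrates that relationship only; it is not a reconstruction of the paper's indices.

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits: ID = log2(2D / W)."""
    return math.log2(2.0 * distance / width)

def index_of_performance(distance, width, recovery_time_s):
    """Bits per second: a Fitts-style throughput that could serve as a
    recovery-ability index (illustrative, not one of the paper's four)."""
    return index_of_difficulty(distance, width) / recovery_time_s

# Example: hand displaced 0.12 m by the impact, must settle within +/- 0.01 m.
print(index_of_difficulty(0.12, 0.02))            # ~3.58 bits
print(index_of_performance(0.12, 0.02, 0.8))      # ~4.5 bits/s
```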

19.
T Masino  E I Knudsen 《Nature》1990,345(6274):434-437
To generate behaviour, the brain must transform sensory information into signals that are appropriate to control movement. Sensory and motor coordinate frames are fundamentally different, however: sensory coordinates are based on the spatiotemporal patterns of activity arising from the various sense organs, whereas motor coordinates are based on the pulling directions of muscles or groups of muscles. Results from psychophysical experiments suggest that in the process of transforming sensory information into motor control signals, the brain encodes movements in abstract or extrinsic coordinate frames, that is ones not closely related to the geometry of the sensory apparatus or of the skeletomusculature. Here we show that an abstract code underlies movements of the head by the barn owl. Specifically, the data show that subsequent to the retinotopic code for space in the optic tectum yet before the motor neuron code for muscle tensions there exists a code for head movement in which upward, downward, leftward and rightward components of movement are controlled by four functionally distinct neural circuits. Such independent coding of orthogonal components of movement may be a common intermediate step in the transformation of sensation into behaviour.  相似文献   

20.
To address the narrow field of view and low positioning accuracy of the conventional pan-tilt units used on citrus-orchard robots, a target-tracking method combining the continuously adaptive mean shift (CAMShift) algorithm with a variable-structure PID controller is proposed on a bionic-eye platform. Specifically, a four-degree-of-freedom bionic-eye vision platform is designed by imitating the structure of the human eye and neck; an imaging model of the bionic eye is built and an adaptive calibration method for the camera extrinsic parameters is proposed, to handle the fact that the platform's extrinsic parameters change continuously with camera pose; a CAMShift-based target-tracking algorithm for the bionic eye is proposed and achieves continuous tracking of the target; and, to improve the dynamic response of the bionic-eye system under saturation nonlinearity, an anti-windup variable-structure adaptive PID controller is used to design the control system. Experiments on the designed bionic-eye platform, with citrus fruit as the target, show that the mean centre-position pixel error (CPE) is 7.2 for the left eye and 6.1 for the right eye, both below the design requirement of CPE < 10, and that each frame is processed in under 0.1 s. The system meets practical requirements with high robustness and accuracy.
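The tracking loop pairs OpenCV's CamShift with a PID that drives the pan and tilt axes; the sketch below shows both pieces in their simplest form (fixed gains with conditional-integration anti-windup, hypothetical interfaces; the paper's variable-structure gain adaptation and extrinsic-parameter calibration are not reproduced).

```python
import cv2
import numpy as np

class AntiWindupPID:
    """PID with conditional integration: the integral stops accumulating
    while the commanded output is saturated (a simple anti-windup scheme;
    the paper's variable-structure gain adaptation is not modelled here)."""
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd, self.out_limit = kp, ki, kd, out_limit
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        raw = (self.kp * err + self.ki * self.integral
               + self.kd * (err - self.prev_err) / dt)
        out = float(np.clip(raw, -self.out_limit, self.out_limit))
        if raw == out:                     # only integrate when not saturated
            self.integral += err * dt
        self.prev_err = err
        return out

def track_step(frame, track_window, roi_hist, pid_pan, pid_tilt, dt=0.1):
    """One CamShift iteration plus pan/tilt commands that centre the target.
    roi_hist is the hue histogram of the target region, built at start-up."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rot_box, track_window = cv2.CamShift(back_proj, track_window, criteria)
    (cx, cy), _, _ = rot_box
    h, w = frame.shape[:2]
    pan_cmd = pid_pan.step(cx - w / 2.0, dt)    # drive the centre-position
    tilt_cmd = pid_tilt.step(cy - h / 2.0, dt)  # error (CPE) toward zero
    return pan_cmd, tilt_cmd, track_window
```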
