Similar Articles
20 similar articles found (search time: 15 ms)
1.
Nadler JW  Angelaki DE  DeAngelis GC 《Nature》2008,452(7187):642-645
Perception of depth is a fundamental challenge for the visual system, particularly for observers moving through their environment. The brain makes use of multiple visual cues to reconstruct the three-dimensional structure of a scene. One potent cue, motion parallax, frequently arises during translation of the observer because the images of objects at different distances move across the retina with different velocities. Human psychophysical studies have demonstrated that motion parallax can be a powerful depth cue, and motion parallax seems to be heavily exploited by animal species that lack highly developed binocular vision. However, little is known about the neural mechanisms that underlie this capacity. Here we show, by using a virtual-reality system to translate macaque monkeys (Macaca mulatta) while they viewed motion parallax displays that simulated objects at different depths, that many neurons in the middle temporal area (area MT) signal the sign of depth (near versus far) from motion parallax in the absence of other depth cues. To achieve this, neurons must combine visual motion with extra-retinal (non-visual) signals related to the animal's movement. Our findings suggest a new neural substrate for depth perception and demonstrate a robust interaction of visual and non-visual cues in area MT. Combined with previous studies that implicate area MT in depth perception based on binocular disparities, our results suggest that area MT contains a more general representation of three-dimensional space that makes use of multiple cues.
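A simplified geometric sketch (my notation, not the authors' analysis) shows why an extra-retinal pursuit signal is needed to recover the sign of depth from motion parallax. Suppose the observer translates laterally with speed $T$ while fixating a point at distance $f$, and a second point lies at distance $z$ along the same line of sight. To first order in the small angles involved, the eye must rotate at rate $\dot\alpha \approx T/f$ to hold fixation, and the second point moves on the retina at
$$\dot\theta \;\approx\; \frac{T}{z}-\frac{T}{f}, \qquad\text{so}\qquad \frac{\dot\theta}{\dot\alpha} \;\approx\; \frac{f-z}{z}.$$
The retinal velocity $\dot\theta$ alone is ambiguous because its sign also depends on the (unknown) direction of self-motion $T$; its sign relative to the pursuit rate $\dot\alpha$, however, distinguishes near ($z<f$) from far ($z>f$), which is the kind of near/far signal the MT neurons described above appear to carry.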

2.
Perception of shape from shading   (total citations: 7; self-citations: 0; citations by others: 7)
V S Ramachandran 《Nature》1988,331(6152):163-166
The human visual system can rapidly and accurately derive the three-dimensional orientation of surfaces by using variations in image intensity alone. This ability to perceive shape from shading is one of the most important yet poorly understood aspects of human vision. Here we present several findings which may help reveal computational mechanisms underlying this ability. First, we find that perception of shape from shading is a global operation which assumes that there is only one light source illuminating the entire visual image. This implies that if two identical objects are viewed simultaneously and illuminated from different angles, then we would be able to perceive three-dimensional shape accurately in only one of them at a time. Second, three-dimensional shapes that are defined exclusively by shading can provide tokens for the perception of apparent motion, suggesting that the motion mechanism is remarkably versatile in the kinds of inputs it can use. Lastly, the occluding edges which delineate an object from its background can also powerfully influence the perception of three-dimensional shape from shading.

3.
The perception of heading during eye movements.   (total citations: 5; self-citations: 0; citations by others: 5)
C S Royden  M S Banks  J A Crowell 《Nature》1992,360(6404):583-585
When a person walks through a rigid environment while holding eyes and head fixed, the pattern of retinal motion flows radially away from a point, the focus of expansion (Fig. 1a). Under such conditions of translation, heading corresponds to the focus of expansion and people identify it readily. But when making an eye/head movement to track an object off to the side, retinal motion is no longer radial (Fig. 1b). Heading perception in such situations has been modelled in two ways. Extra-retinal models monitor the velocity of rotational movements through proprioceptive or efference information from the extraocular and neck muscles and use that information to discount rotation effects. Retinal-image models determine (and eliminate) rotational components from the retinal image alone. These models have been tested by measuring heading perception under two conditions. First, observers judged heading while tracking a point on a simulated ground plane. Second, they fixated a stationary point and the flow field simulated the effects of a tracking eye movement. Extra-retinal models predict poorer performance in the simulated condition because the eyes do not move. Retinal-image models predict no difference in performance because the two conditions produce identical patterns of retinal motion. Warren and Hannon observed similar performance and concluded that people do not require extra-retinal information to judge heading with eye/head movements present, but they used extremely slow tracking eye movements of 0.2-1.2 deg s-1; a moving observer frequently tracks objects at much higher rates (L. Stark, personal communication). Here we examine heading judgements at higher, more typical eye movement velocities and find that people require extra-retinal information about eye position to perceive heading accurately under many viewing conditions.
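For reference, the standard instantaneous optic-flow equations (Longuet-Higgins and Prazdny; signs depend on the coordinate convention chosen, and this background is not part of the abstract above) make the distinction between the two model classes concrete. For an eye of focal length $f$ translating with velocity $(T_x,T_y,T_z)$ and rotating with velocity $(\omega_x,\omega_y,\omega_z)$, the image velocity at point $(x,y)$ of a scene point at depth $Z$ is
$$u = \frac{-fT_x + xT_z}{Z} + \frac{xy}{f}\,\omega_x - \Big(f+\frac{x^2}{f}\Big)\omega_y + y\,\omega_z,$$
$$v = \frac{-fT_y + yT_z}{Z} + \Big(f+\frac{y^2}{f}\Big)\omega_x - \frac{xy}{f}\,\omega_y - x\,\omega_z.$$
The rotational terms do not depend on depth $Z$, so they can in principle be estimated and removed either from the flow field itself (retinal-image models) or from an efference/proprioceptive estimate of eye velocity (extra-retinal models); the residual translational field is then radial about the true heading.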

4.
M J Morgan  S Benton 《Nature》1989,340(6232):385-386
If photographs are taken of moving objects at slow shutter speeds the images of the objects are blurred. In human vision, however, we are not normally conscious of blur from moving objects despite the fact that the temporal response of the photoreceptors is sluggish. It has been suggested that there are motion-deblurring mechanisms specifically to aid the visual system in the analysis of the shape of retinally moving targets. Models of motion deblurring have been influenced by the finding that certain very precise spatial pattern discriminations are unaffected by motion. An example is vernier hyperacuity, in which the observer must detect the direction of offset between two lines with abutting ends. With a stationary stimulus, observers can detect a vernier cue of less than 10 arcsec and acuity is unaffected by retinal-image motion of up to 3 deg s-1. We confirm this finding, but provide evidence against any general deblurring mechanism by showing that another kind of hyperacuity, discrimination of the distance between two parallel lines (spatial interval acuity), is interfered with by motion. This argues against a general deblurring mechanism, such as a neural network 'shifter circuit', and we point out that the high level of vernier acuity for moving stimuli is susceptible to an alternative explanation.

5.
J P Roy  R H Wurtz 《Nature》1990,348(6297):160-162
Movement of an observer through the environment generates motion on the retina. This optic flow provides information about the direction of self-motion, but only if it contains differential motion of elements at different depths. If the observer tracks a stationary object while moving in a direction different from his line of sight, the images of objects in the foreground and in the background move in opposite directions. We have found neurons in the cerebral cortex of monkeys that prefer one direction of motion when the disparity of a stimulus corresponds to foreground motion and prefer the opposite direction when the disparity corresponds to background motion. We propose that these neurons contribute a signal about the direction of self-motion.

6.
fMRI evidence for objects as the units of attentional selection.   (total citations: 18; self-citations: 0; citations by others: 18)
K M O'Craven  P E Downing  N Kanwisher 《Nature》1999,401(6753):584-587
Contrasting theories of visual attention emphasize selection by spatial location, visual features (such as motion or colour) or whole objects. Here we used functional magnetic resonance imaging (fMRI) to test key predictions of the object-based theory, which proposes that pre-attentive mechanisms segment the visual array into discrete objects, groups, or surfaces, which serve as targets for visual attention. Subjects viewed stimuli consisting of a face transparently superimposed on a house, with one moving and the other stationary. In different conditions, subjects attended to the face, the house or the motion. The magnetic resonance signal from each subject's fusiform face area, parahippocampal place area and area MT/MST provided a measure of the processing of faces, houses and visual motion, respectively. Although all three attributes occupied the same location, attending to one attribute of an object (such as the motion of a moving face) enhanced the neural representation not only of that attribute but also of the other attribute of the same object (for example, the face), compared with attributes of the other object (for example, the house). These results cannot be explained by models in which attention selects locations or features, and provide physiological evidence that whole objects are selected even when only one visual attribute is relevant.

7.
Segmentation of multiple moving objects based on the level set method   (total citations: 1; self-citations: 1; citations by others: 1)
A method for moving-object detection and segmentation based on level sets and background subtraction is presented that is insensitive to disturbances found in real environments, such as illumination changes. First, the texture difference between the current frame and a background reference frame is computed and combined with the grey-level difference to form a difference image. Next, the texture features are analysed and appropriate measures are taken to reduce the adverse effects of cluttered motion in the scene, yielding robust detection of moving objects. Finally, a region-based level set method is used to integrate the multiple sources of information and segment multiple rigid or non-rigid moving objects. Simulation experiments on image sequences captured from real scenes verify the effectiveness of the method.
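A minimal Python sketch of this kind of pipeline is given below, assuming greyscale float frames; the function names, the local-standard-deviation texture measure and the stripped-down region update (no curvature/regularisation term) are illustrative choices of mine, not the paper's exact formulation.

import numpy as np
from scipy.ndimage import uniform_filter

def texture_map(gray, k=7):
    # Local standard deviation as a simple texture descriptor.
    mean = uniform_filter(gray, k)
    sq_mean = uniform_filter(gray ** 2, k)
    return np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))

def difference_image(frame, background, alpha=0.5):
    # Combine grey-level and texture differences against the background reference frame.
    d_gray = np.abs(frame - background)
    d_tex = np.abs(texture_map(frame) - texture_map(background))
    return alpha * d_gray + (1.0 - alpha) * d_tex

def segment_moving_objects(diff, n_iter=200):
    # Reduced region-based (Chan-Vese style) level-set evolution on the difference image:
    # phi > 0 marks moving-object regions, phi <= 0 marks background.
    phi = np.where(diff > diff.mean(), 1.0, -1.0)             # crude initialisation
    for _ in range(n_iter):
        c1 = diff[phi > 0].mean() if np.any(phi > 0) else 0.0
        c2 = diff[phi <= 0].mean() if np.any(phi <= 0) else 0.0
        force = (diff - c2) ** 2 - (diff - c1) ** 2           # region-competition term only
        phi = np.clip(phi + 0.5 * np.sign(force), -1.0, 1.0)
    return phi > 0

# Usage: mask = segment_moving_objects(difference_image(frame, background))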

8.
P McLeod  C Heywood  J Driver  J Zihl 《Nature》1989,339(6224):466-467
A visual cue that is often associated with significant stimuli, such as those provided by prey and predators, is movement relative to the observer. An efficient visual system should be able to direct attention to those parts of the visual field that contain such stimuli. What is needed is a system that can filter by movement difference. This could direct attention to a moving item among stationary items, or an item moving in one direction against a background moving in a different direction. Visual search experiments have shown that people are indeed able to filter by movement; that is, they can attend to just the moving items in arrays of moving and stationary stimuli. Single-cell recordings from monkey visual cortex show that the medial temporal cortical area (MT) has some of the properties required to filter by movement. We have now linked these two observations by showing that a patient with bilateral lesions to the presumed human homologue of MT cannot restrict visual attention to the moving items in arrays of both moving and stationary items. This suggests that MT is the site of a movement filter used in normal visual processing.

9.
Hearing visual motion in depth   (total citations: 9; self-citations: 0; citations by others: 9)
Kitagawa N  Ichihara S 《Nature》2002,416(6877):172-174
Auditory spatial perception is strongly affected by visual cues. For example, if auditory and visual stimuli are presented synchronously but from different positions, the auditory event is mislocated towards the locus of the visual stimulus (the ventriloquism effect). This 'visual capture' also occurs in motion perception in which a static auditory stimulus appears to move with the visual moving object. We investigated how the human perceptual system coordinates complementary inputs from auditory and visual senses. Here we show that an auditory aftereffect occurs from adaptation to visual motion in depth. After a few minutes of viewing a square moving in depth, a steady sound was perceived as changing loudness in the opposite direction. Adaptation to a combination of auditory and visual stimuli changing in a compatible direction increased the aftereffect and the effect of visual adaptation almost disappeared when the directions were opposite. On the other hand, listening to a sound changing in intensity did not affect the visual changing-size aftereffect. The results provide psychophysical evidence that, for processing of motion in depth, the auditory system responds to both auditory changing intensity and visual motion in depth.

10.
Jancke D  Chavane F  Naaman S  Grinvald A 《Nature》2004,428(6981):423-426
Exploring visual illusions reveals fundamental principles of cortical processing. Illusory motion perception of non-moving stimuli was described almost a century ago by Gestalt psychologists. However, the underlying neuronal mechanisms remain unknown. To explore cortical mechanisms underlying the 'line-motion' illusion, we used real-time optical imaging, which is highly sensitive to subthreshold activity. We examined, in the visual cortex of the anaesthetized cat, responses to five stimuli: a stationary small square and a long bar; a moving square; a drawn-out bar; and the well-known line-motion illusion, a stationary square briefly preceding a long stationary bar presentation. Whereas flashing the bar alone evoked the expected localized, short latency and high amplitude activity patterns, presenting a square 60-100 ms before a bar induced dynamic activity patterns resembling those of fast movement. The preceding square, even though physically non-moving, created gradually propagating subthreshold cortical activity that must contribute to illusory motion, because it was indistinguishable from cortical representations of real motion in this area. These findings demonstrate the effect of spatio-temporal patterns of subthreshold synaptic potentials on cortical processing and the shaping of perception.

11.
Three-dimensional illusory contours and surfaces.   (total citations: 1; self-citations: 0; citations by others: 1)
G J Carman  L Welch 《Nature》1992,360(6404):585-587
Under general viewing conditions, objects are often partially camouflaged, obscured or occluded, thereby limiting information about their three-dimensional position, orientation and shape to incomplete and variable image cues. When presented with such partial cues, observers report perceiving 'illusory' contours and surfaces (forms) in regions having no physical image contrast. Here we report that three-dimensional illusory forms share three fundamental properties with 'real' forms: (1) the same forms are perceived using either stereo or motion parallax cues (cue invariance); (2) they retain their shape over changes in position and orientation relative to an observer (view stability); and (3) they can take the shape of general contours and surfaces in three dimensions (morphic generality). We hypothesize that illusory contours and surfaces are manifestations of a previously unnoticed visual process which constructs a representation of three-dimensional position, orientation and shape of objects from available image cues.

12.
Olveczky BP  Baccus SA  Meister M 《Nature》2003,423(6938):401-408
An important task in vision is to detect objects moving within a stationary scene. During normal viewing this is complicated by the presence of eye movements that continually scan the image across the retina, even during fixation. To detect moving objects, the brain must distinguish local motion within the scene from the global retinal image drift due to fixational eye movements. We have found that this process begins in the retina: a subset of retinal ganglion cells responds to motion in the receptive field centre, but only if the wider surround moves with a different trajectory. This selectivity for differential motion is independent of direction, and can be explained by a model of retinal circuitry that invokes pooling over nonlinear interneurons. The suppression by global image motion is probably mediated by polyaxonal, wide-field amacrine cells with transient responses. We show how a population of ganglion cells selective for differential motion can rapidly flag moving objects, and even segregate multiple moving objects.

13.
Parallel processing of motion and colour information   (total citations: 1; self-citations: 0; citations by others: 1)
T Carney  M Shadlen  E Switkes 《Nature》1987,328(6131):647-649
When the two eyes are confronted with sufficiently different versions of the visual environment, one or the other eye dominates perception in alternation. A similar situation may be created in the laboratory by presenting images to the left and right eyes which differ in orientation or colour. Although perception is dominated by one eye during rivalry, there are a number of instances in which visual processes nevertheless continue to integrate information from the suppressed eye. For example the interocular transfer of the motion after-effect is undiminished when induced during binocular rivalry. Thus motion information processing may occur in parallel with the rivalry process. Here we describe a novel example in which the visual system simultaneously exhibits binocular rivalry and vision that integrates signals from both eyes. This apparent contradiction is resolved by postulating parallel visual processes devoted to the analyses of colour and motion information. Counterphased gratings are viewed dichoptically such that for one eye the grating is composed of alternating yellow and black stripes (luminance) while for the other it is composed of alternating red and green stripes (chrominance). When the gratings are fused, a moving grating is perceived. A consistent direction of motion can only be achieved if left and right monocular signals are integrated by the nervous system. Yet the apparent colour of the binocular percept alternates between red-green and yellow-black. These observations demonstrate the segregation of processing by the early motion system from that affording the perception of colour. Although, in this stimulus, colour information in itself can play no part in the cyclopean perception of motion direction, colour is carried along perceptually (filled in) by the moving pattern which is integrated from both eyes.

14.
Thiele A  Stoner G 《Nature》2003,421(6921):366-370
Natural visual scenes are cluttered with multiple objects whose individual features must somehow be selectively linked (or 'bound') if perception is to coincide with reality. Recent neurophysiological evidence supports a 'binding-by-synchrony' hypothesis: neurons excited by features of the same object fire synchronously, while neurons excited by features of different objects do not. Moving plaid patterns offer a straightforward means to test this idea. By appropriate manipulations of apparent transparency, the component gratings of a plaid pattern can be seen as parts of a single coherently moving surface or as two non-coherently moving surfaces. We examined directional tuning and synchrony of area-MT neurons in awake, fixating primates in response to perceptually coherent and non-coherent plaid patterns. Here we show that directional tuning correlated highly with perceptual coherence, which is consistent with an earlier study. Although we found stimulus-dependent synchrony, coherent plaids elicited significantly less synchrony than did non-coherent plaids. Our data therefore do not support the binding-by-synchrony hypothesis as applied to this class of motion stimuli in area MT.

15.
Bloj MG  Kersten D  Hurlbert AC 《Nature》1999,402(6764):877-879
Objects in the natural world possess different visual attributes, including shape, colour, surface texture and motion. Previous perceptual studies have assumed that the brain analyses the colour of a surface independently of its three-dimensional shape and viewing geometry, although there are neural connections between colour and two-dimensional form processing early in the visual pathway. Here we show that colour perception is strongly influenced by three-dimensional shape perception in a novel, chromatic version of the Mach Card, a concave folded card with one side made of magenta paper and the other of white paper. The light reflected from the magenta paper casts a pinkish glow on the white side. The perceived colour of the white side changes from pale pink to deep magenta when the perceived shape of the card flips from concave to convex. The effect demonstrates that the human visual system incorporates knowledge of mutual illumination (the physics of light reflection between surfaces) at an early stage in colour perception.

16.
Through the development of a high-acuity fovea, primates with frontal eyes have acquired the ability to use binocular eye movements to track small objects moving in space. The smooth-pursuit system moves both eyes in the same direction to track movement in the frontal plane (frontal pursuit), whereas the vergence system moves left and right eyes in opposite directions to track targets moving towards or away from the observer (vergence tracking). In the cerebral cortex and brainstem, signals related to vergence eye movements--and the retinal disparity and blur signals that elicit them--are coded independently of signals related to frontal pursuit. Here we show that these types of signal are represented in a completely different way in the smooth-pursuit region of the frontal eye fields. Neurons of the frontal eye field modulate strongly during both frontal pursuit and vergence tracking, which results in three-dimensional cartesian representations of eye movements. We propose that the brain creates this distinctly different intermediate representation to allow these neurons to function as part of a system that enables primates to track and manipulate objects moving in three-dimensional space.

17.
Current views of the visual system assume that the primate brain analyses form and motion along largely independent pathways; they provide no insight into why form is sometimes interpreted as motion. In a series of psychophysical and electrophysiological experiments in humans and macaques, here we show that some form information is processed in the prototypical motion areas of the superior temporal sulcus (STS). First, we show that STS cells respond to dynamic Glass patterns, which contain no coherent motion but suggest a path of motion. Second, we show that when motion signals conflict with form signals suggesting a different path of motion, both humans and monkeys perceive motion in a compromised direction. This compromise also has a correlate in the responses of STS cells, which alter their direction preferences in the presence of conflicting implied motion information. We conclude that cells in the prototypical motion areas in the dorsal visual cortex process form that implies motion. Estimating motion by combining motion cues with form cues may be a strategy to deal with the complexities of motion perception in our natural environment.

18.
Transparency and coherence in human motion perception   (total citations: 3; self-citations: 0; citations by others: 3)
When confronted with moving images, the visual system often must decide whether the motion signals arise from a single object or from multiple objects. A special case of this problem arises when two independently moving gratings are superimposed. The gratings tend to cohere and move unambiguously in a single direction (pattern motion) instead of moving independently (component motion). Here we report that the tendency to see pattern motion depends very strongly on the luminance of the intersections (that is, the regions where the gratings overlap) relative to that of the gratings in a way that closely parallels the physics of transparency. When the luminance of these regions is chosen appropriately, pattern motion is destroyed and replaced by the appearance of two transparent gratings moving independently. The observations imply that motion detecting mechanisms in the visual system must have access to tacit 'knowledge' of the physics of transparency and that this knowledge can be used to segment the scene into different objects. The same knowledge could, in principle, be used to avoid confusing shadows with real object boundaries.

19.
This paper gives an efficient approach to reconstructing multiple moving objects (multi-object). Each object undergoes independent rigid motion consisting of translation and rotation. The traditional FBP algorithm resolves the one-object motion problem rather well; however, it suffers from perceptible motion artifacts in multi-object cases. This paper proposes a new motion-compensated reconstruction approach with a priori knowledge of the rigid motion model. Both an FBP-type and an ART-type algorithm were derived. In ...
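As a hedged sketch of the general idea only (my notation; the abstract does not give the paper's exact derivation, weighting or discretisation), assume object $k$ follows a known rigid motion
$$\Gamma_k(t)\,x = R_k(t)\,x + d_k(t),$$
with rotation $R_k(t)$ and translation $d_k(t)$. For 2-D parallel-beam data with ramp-filtered projections $q_\theta$ acquired at times $t_\theta$, a motion-compensated FBP-type reconstruction evaluates each filtered projection at the detector coordinate the point actually projected to at that view's acquisition time,
$$\hat f_k(x) \;\approx\; \int_0^{\pi} q_\theta\!\big(\Gamma_k(t_\theta)\,x \cdot n_\theta\big)\, d\theta, \qquad n_\theta = (\cos\theta,\sin\theta),$$
ignoring redundancy weighting and truncation issues. An ART-type variant applies the same motion-corrected ray geometry inside the iterative update instead of in a single backprojection pass.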

20.
Detection and tracking of moving objects in dynamic scenes   (total citations: 1; self-citations: 0; citations by others: 1)
To detect and track moving objects in both static and dynamic scenes, a video surveillance method combining motion detection with video tracking is proposed. A four-parameter affine motion model is established to describe the global motion, and its parameters are estimated by block matching; moving objects are then detected with a Horn-Schunck algorithm based on global motion compensation; finally, a Kalman filter tracks the centroid position, width and height of each moving object. Experimental results show that the method can effectively detect and track moving objects in both static and dynamic scenes.
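A minimal Python sketch of the tracking stage only is shown below; the constant-velocity state model, the noise levels and the 8-dimensional state layout are illustrative assumptions of mine, not values from the paper. (The four-parameter global motion model referred to above is commonly the similarity transform x' = a·x − b·y + c, y' = b·x + a·y + d.)

import numpy as np

# Constant-velocity Kalman filter over the detected blob's centroid, width and height.
# State: [cx, cy, w, h, vcx, vcy, vw, vh]; measurement: [cx, cy, w, h].
dt = 1.0
F = np.eye(8)
F[0, 4] = F[1, 5] = F[2, 6] = F[3, 7] = dt      # state transition (constant velocity)
H = np.hstack([np.eye(4), np.zeros((4, 4))])    # we observe position and size only
Q = 1e-2 * np.eye(8)                            # process noise (illustrative)
R = 1.0 * np.eye(4)                             # measurement noise (illustrative)

def kalman_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured [cx, cy, w, h] of the detected moving object.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(8) - K @ H) @ P
    return x, P

# Usage: initialise x = np.zeros(8), P = 10.0 * np.eye(8), then call
# x, P = kalman_step(x, P, measurement) once per frame.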

