Why do optic nerves cross over?

The deep layers of the SC appear to be a coordinating domain [72,73] involved in the integration of information and contributing to effective guidance of movements [67,74]. As mentioned, the primate SC has precisely coordinated visual-to-visuomotor maps related to extra-ocular eye muscle function [44]. Owing to IRP emanating from the lateral retina, motor, tactile, proprioceptive, and visual information about the hand can be integrated in the contralateral region of the SC without interhemispheric communication when the hand operates in the ipsilateral visual hemifield (Figure 1c).

Multimodal neurons responding to tactile as well as visual events have also been observed in the putamen [75] and in parietal [76] and premotor [77] cortical regions. Studies using visual images of alien, real, or false limbs have demonstrated that passive viewing of such body parts can influence the perception of somatosensory stimuli [78,79]. Area 5 in the parietal lobe of the primate brain appears to be involved in monitoring the position and movement of the body.

Neurons in this area have been found to encode the position of a monkey's arm even while the arm was hidden from view. Area 5 neurons also responded to the position of a visible, realistic false arm, and distinguished a right arm from a left arm [50,78].

Dushanova et al. [ ] found that approximately half of the M1 neurons that were active when monkeys performed a task were also active when the monkeys observed a human performing the same action.

Notably, when subjects crossed their arms, cross-modal cuing effects were reduced [79,82]. Because visually based directing of the hands with crossed arms relies on interhemispheric communication, crossing the arms may simulate a visual system without IRP. It has been suggested that humans and animals form cognitive maps of their environment [83]. Such maps may be sensory or motor, and they may represent the external world or the body itself [74].

An alternative view is the simulation theory reviewed by Hesslow [84], which proposes that a simulated action can elicit perceptual activity resembling the activity that would have occurred if the action had actually been performed. Research has demonstrated a similarity between patterns mapped in the brain and concrete objects [85]. The retrosplenial cortex in humans seems to be directly involved in coordinating and translating between egocentric and allocentric frames of reference.

The latter is a frame of reference centered on a point in space distinct from the space the perceiver occupies [86-89]. The brain does not create a single, unified representation of space, but produces numerous representations of space to achieve stable perception, spatial knowledge, and motor guidance [43].
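To illustrate what such a translation involves, the following Python sketch converts an object's allocentric (world-frame) position into egocentric coordinates, given the perceiver's position and heading. It is a geometric illustration only, with hypothetical names and conventions, not a model of retrosplenial processing:

```python
import numpy as np

# Illustrative sketch: re-expressing an allocentric (world-frame) object
# position relative to the perceiver's position and heading. All names
# and conventions here are assumptions made for this example.

def allo_to_ego(obj_xy, self_xy, heading_rad):
    """World-frame object position -> (rightward, forward) relative to self."""
    dx, dy = np.asarray(obj_xy) - np.asarray(self_xy)
    right = dx * np.sin(heading_rad) - dy * np.cos(heading_rad)
    forward = dx * np.cos(heading_rad) + dy * np.sin(heading_rad)
    return np.array([right, forward])

# Facing "north" (+y, heading 90 degrees), an object due east lies to the right.
print(allo_to_ego(obj_xy=(1.0, 0.0), self_xy=(0.0, 0.0),
                  heading_rad=np.pi / 2))  # -> [1., 0.]
```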

Object representations in visual short-term memory are formed from the visible characteristics of a stimulus, such as color, shape, size, orientation, location, and movement. In contrast to other properties such as color and shape, location plays a key role by providing the spatial map to which individual features are attached, and with which they are eventually combined, to form objects [90,91].

Thus, the location of the object of attention is an important factor in multimodal perception [90,91], and if visual short-term memory is seen as a map or a three-dimensional coordinate system, the object of attention seems to have similarities with the origin, i.e., the center of that coordinate system.

Humans are able to grip objects whether the objects are heard, seen, or touched. Consequently, information about the location of objects is recoded in a joint-centered frame of reference, regardless of the sensory modality involved [94]. The location of reaching targets may be encoded in an eye-centered frame of reference whether the targets are visual, auditory, proprioceptive, or imaginary [94]. The recalled eye-centered location is updated following each eye and head movement, and also when vision is not used, which may reflect a predominant role of vision in human spatial perception [94,95].

Behavioral studies in humans and studies of reach-related cerebral areas in primates have highlighted the dominance of eye-centered coordinates in movement planning [96]. Recent research has revealed that the frame of reference may shift. Parietal area V6A contains neurons modulated by the direction of gaze as well as neurons that code the direction of arm movement.

The majority of V6A reaching neurons use a system that encompasses both of these reference frames [97]. The dorsal premotor cortex (PMd) is another area highly involved in visually guided reaching. In the PMd, some neurons encode reaching goals using limb-centered reference frames, others employ eye-centered reference frames, and some encode reaching goals in a reference frame defined by the combined position of the eyes and hand [98].
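These reference frames can be made concrete with a small numerical sketch. The Python snippet below (an illustration only, not taken from the studies cited above; the coordinates and the 50/50 weighting of the combined frame are assumptions) expresses a single target in eye-centered, hand-centered, and combined coordinates:

```python
import numpy as np

# Illustrative sketch: one target expressed in the eye-centered,
# hand-centered, and combined reference frames discussed above.
# Positions are arbitrary body-centered coordinates (meters); the
# 0.5/0.5 weighting below is an assumption for illustration only.

target_body = np.array([0.30, -0.10, 0.40])  # target, body-centered
gaze_body = np.array([0.00, 0.00, 0.50])     # fixation point, body-centered
hand_body = np.array([0.25, -0.20, 0.10])    # current hand position

# Eye-centered coding: target relative to the fixation point.
target_eye = target_body - gaze_body

# Limb-centered coding: target relative to the current hand position.
target_hand = target_body - hand_body

# A "combined" frame anchored between eye and hand; real PMd neurons
# show a continuum of such mixtures [98].
anchor = 0.5 * gaze_body + 0.5 * hand_body
target_combined = target_body - anchor

print(target_eye, target_hand, target_combined)
```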

Studies of the SC [ ], the ventral premotor cortex [ ], and the dorsal premotor cortex [ ] have identified populations of arm-movement-related neurons that are either clearly eye-centered or consistent with eye-centered coding [96].

Studies of reaching movements to memorized targets in three-dimensional space, with visual feedback of the moving extremity, suggest a coordinate system centered on the line of sight [ - ].

Functional magnetic resonance imaging studies have shown that the human premotor cortex and posterior parietal cortex (PPC) contain neurons that specifically encode visual stimuli close to the hand, suggesting that these regions participate in a mechanism for the selective representation of visual stimuli near the body in hand-centered coordinates [ ]. In the PPC there is considerable overlap among the regions important for spatial working memory, visually guided actions, and navigation, and the PPC contains specialized subunits for processing the spatial goals of saccades and reaching movements.

Together these subunits are commonly labeled the parietal reach region (PRR), which corresponds primarily to the medial intraparietal cortex [43]. Sorrento and Henriques [ ] studied the effects of gaze alterations on repeated arm movements toward a fixed target and found that, when the second movement was produced, it was guided by an updated, eye-centered frame of reference.

Based on this and other studies [ , ], Medendorp [43] suggested that gaze-centered coordinates are vital for achieving spatial reliability in the motor system. Hand movements are characteristically initiated before the end of the orienting saccade to a target [ ].
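The gaze-centered updating these studies describe amounts to a simple computation: a remembered target is stored as a vector relative to the fovea, and after each saccade that vector is shifted by the saccade vector. A minimal sketch, assuming two-dimensional positions in degrees of visual angle (function and variable names are hypothetical):

```python
import numpy as np

# Minimal sketch of gaze-centered updating (remapping): the remembered
# target is stored relative to the fovea and, after every saccade, the
# stored vector is shifted by the saccade vector so it remains valid in
# the new eye-centered frame. Names and numbers are illustrative.

def update_after_saccade(target_eye: np.ndarray, saccade: np.ndarray) -> np.ndarray:
    """Return the remembered target re-expressed relative to the new gaze."""
    return target_eye - saccade

target_eye = np.array([10.0, -5.0])  # remembered target, degrees from fovea
for saccade in [np.array([8.0, 0.0]), np.array([-3.0, 4.0])]:
    target_eye = update_after_saccade(target_eye, saccade)

print(target_eye)  # -> [ 5. -9.]: still points at the same external location
```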

This indicates that the peripheral visual information available for planning eye and hand movements relative to a target is the same, and that this information is stored in visual short-term memory. Thus, the central nervous system may use a common spatial representation of targets to plan both eye and hand movements [ ].

Sighting dominance, i.e., the habitual preference for one eye when aligning targets, is generally regarded as a stable trait. However, Khan and Crawford [ , ] found that subjects altered ocular dominance as a function of horizontal gaze direction in a reaching task. Notably, this alternation of ocular dominance depended on which hand was used to reach out and grasp the target [ , ]. The EF hypothesis implies that the visual system is well equipped to serve the hand in a reaching task. Vision profoundly influences arm movements soon after birth [ ].

The preceding section demonstrated that gaze-centered coordinates are commonly used and essential in the visual directing of the hand [43, 94-96, ], and the alternation of ocular dominance in reaching may be another example [ , ]. Reaching movements in primates typically begin in the inferior quadrants of the visual field because of the lower position of the upper limb relative to the visual axis.

A bias in spatial discrimination in favor of the lower visual field during reaching movements has been described [ - ]. It has been proposed that this bias may account for the faster manual reaction times reported for the lower visual field, and that it influences the capacity of primates to reach for, grasp, and manipulate objects [ ]. Thura et al. [ ] recorded single neurons in the FEF of two monkeys as they executed a visually guided saccade task while keeping the hand at specific locations on a touch screen.

They concluded that visual and proprioceptive signals derived from the hand are integrated by FEF neurons, and showed that hand-related modulation is more pronounced in the lower than in the upper visual hemifield [ ]. The medial posterior parietal cortex area V6A is proposed to be the earliest node of the dorsal visual stream where visual, eye-position, and arm-position information converge [23, - ].
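A common way to model this kind of visual-proprioceptive integration is reliability-weighted (maximum-likelihood) cue combination, in which each cue is weighted by its inverse variance. The sketch below is a textbook model offered for illustration, not the computation Thura et al. tested; all numbers are invented:

```python
# Standard reliability-weighted (maximum-likelihood) cue combination,
# often used to model visual/proprioceptive integration. This is a
# textbook model, not the mechanism tested in the study above.

def fuse(mu_vis, var_vis, mu_prop, var_prop):
    """Combine two Gaussian estimates of hand position."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_prop
    var = 1 / (1 / var_vis + 1 / var_prop)
    return mu, var

# Vision is usually the more reliable cue (smaller variance).
mu, var = fuse(mu_vis=12.0, var_vis=1.0, mu_prop=15.0, var_prop=4.0)
print(mu, var)  # -> 12.6 0.8: the fused estimate leans toward vision
```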

V6A contains arm-movement-related neurons that encode the direction of reach [ ], hand orientation [ ], and grip formation [ ]. Hence, multisensory encoding of space is likely to be realized in V6A [23, ]. A predominance of visual neurons with receptive fields representing the lower visual field, where the hand usually operates, has been demonstrated in area V6A [ , ].

The conjunction of visual and somatosensory information is considered to form the representation of peripersonal space in many primate brain areas [79, , ]. Hadjidimitrakis et al. [ ] found that neurons in area V6A are sensitive to visual fixation and to the location of the foveated target in three-dimensional space, and that they are more strongly activated by gaze positions within peripersonal space.

The concept that binocular vision and abundant IRP result in stereopsis is well established. Studies have used the presence of binocular vision as evidence of stereopsis in, for example, Macropodidae [ ] and tyrannosaurs [ ]. This could also serve as evidence for the EF hypothesis, since visual control of the forelimbs seems to be common in the foraging behavior of these groups [ ].

Based on the fossil record, Stevens [ ] concluded that Tyrannosaurus rex and other coelurosaurs possessed functional stereopsis. It may be that bipedal coelurosaurs commonly used the forelimbs in the binocular field. The anatomy of Tyrannosaurus rex and Troodon indicates considerable binocular vision below the axis of the head [ ], where the forelimbs were likely to operate.

It has been proposed that mammals and birds may use their binocular visual fields differently [ , ]. Martin proposed that binocularity in birds does not result in stereopsis, with the possible exception of owls; rather, its primary role is the control of bill or feet position in foraging [ ]. Such visual control of body appendages provides a functional analogy with the EF hypothesis [12].

Evidence demonstrates that communication between visual and motor neurons is slower when interhemispheric transfer is required [24-27].

Multimodal sensory information used in hand coordination is therefore likely to be transmitted more slowly in a primate brain without IRP. Moreover, data on multimodal sensory integration and the primary role of gaze-centered coordinates in reaching tasks [43, 94-96, ] indicate that supervision of the hands is largely integrated with motor control. Neurons in the primary motor cortex that respond to viewed actions of a hand [80], visual feedback resulting in modification of arm movements [ , ], multimodal neurons responding to tactile as well as visual events [69-71, 75-77], and the reduction of cross-modal cuing effects when the arms are crossed [79,82], simulating a visual system without IRP, also support the EF hypothesis.

Ipsilateral retinal projections originating only from the temporal retina conform to the hypothesis, since IRP from the nasal retina would increase the need for interhemispheric communication [12]. The EF hypothesis can also provide keys to the evolution of IRP in many non-mammalian vertebrates, for example the high proportions of IRP in limbless but phylogenetically diverse groups such as snakes, caecilians, and cyclostomes (Figure 1a) [12]. The low proportion of IRP in most fishes, birds, and reptiles is likewise in accordance with the EF hypothesis [12].

Depth perception through binocular disparity appears to be largely restricted to mammals [11,12]. The X-ray hypothesis of Changizi and Shimojo [10] does not take into account that early primates were small compared with environmental objects such as leaves, and therefore most likely did not achieve the suggested selective advantage of seeing through such objects [5]. It is commonly suggested that binocular vision is especially useful to predators for estimating the distance to prey, whereas animals that are preyed upon often have laterally situated eyes, which allow them to scan a broad area of the surroundings without moving the head [ ].

However, the Newton-Müller-Gudden (NGM) law has some inconsistencies. Predatory mammals such as dolphins display no IRP [ ], and the variation in IRP among non-mammalian vertebrates has been suggested to be inexplicable and to lack association with a predatory lifestyle [11]. Heightened depth perception within the working space of the hand has adaptive value in arboreal primates. Arboreal marsupials, as well as fruit bats that use claws on the wing to manipulate fruit [ - ], possess a primate-like visual system with a high proportion of IRP.

The EF hypothesis might be evaluated through comparative analyses of mammalian and non-mammalian associations among IRP, eye convergence, and visual guidance of the forelimbs. Ultimately, it is DNA that determines whether the axon of a retinal ganglion cell crosses the midline or not [18-21], and transcription factors play vital roles in this process [ - ].

There are indications that visual guidance of the forelimbs may have influenced the morphogenesis of the retina and the regionalization of the OC area in numerous vertebrate species [12].

Many molecules and mechanisms involved in OC formation have been conserved in evolution [21], and the EF hypothesis may provide an opportunity to explore associations between visual guidance of the forelimbs and alterations in DNA.

A prediction of the EF hypothesis is that binocular vision should be expected in animals with forelimbs or similar appendages that habitually operate in front of the animal. This seems to be the case in praying mantises, insects that capture and manipulate prey with powerful forelimbs. The eyes of mantises offer a wide binocular field and, at close range, precise stereoscopic vision [ ].

The proportion of IRP in mantises seems not to have been investigated, which offers an opportunity to assess the EF hypothesis in another phylum. Octopus vulgaris may be another candidate.

This species has been reported to combine arm-location information with visual input to control complex goal-directed movements [ ]. This review supports the principle that evolutionary modifications in the proportions of IRP in the primate brain contributed to visual guidance of the hands, and emphasizes that stereopsis is largely associated with visual directing of the hand. Accurate movement of the primate forelimbs depends on continuous and reciprocal interaction between motor and sensory systems.

Goodale proposed that vision originally developed for the control of movement [1-3]. The EF hypothesis provides a rationale for the location of the eyes in primates and predatory mammals, and it is applicable to non-mammalian species.

In addition, the EF hypothesis suggests how the classic vertebrate cross-lateralized organization for visually guided limb movements may have been preserved in early primates as they gradually shifted their ecological niche toward an arboreal lifestyle. It postulates that evolutionary change towards hemidecussation in the OC provided parsimonious and efficient neural pathways in animals with an increasing degree of frontal vision and frontally directed, visually guided motor behavior.

Further studies may clarify the extent to which the optic chiasm was a turning point in the evolution of stereopsis.

I thank three anonymous reviewers for valuable comments, Eric Warrant for information about invertebrate visual systems, and Pieter Medendorp and Kaspar Meyer for reading an early version and providing editorial comments.

I am grateful to Hans O. Richter for information about hand performance in the lower visual field, and to the Lucidus Consultancy for help with language and editorial advice.
