Introduction to Psychology: Australian Edition
Lead Author(s): Meaghan Altman, Dawn Darlaston-Jones, Maddie Boe, Pat Dudgeon
Introduction to Psychology: Australian Edition presents material such as methods, memory and states of consciousness in an engaging way, so that students can better understand and synthesise the content. The book is written around the principles that underlie learning and memory, with the goal of providing an exciting experience that helps students retain information in the long term. The Australian edition features newly developed Indigenous content and has been edited by our team of Australian editors.
Chapter 5: Sensation & Perception
As we discussed in Chapter 3, the brain is the only organ in the body to be completely encased in bone. While this encasement provides the obvious advantage of protecting the brain from injury or assault, it also means that the brain is completely isolated from the surrounding environment. In order to survive, however, the brain must steer the body through a world of information and simultaneously derive meaning from it. Humans can only detect a fraction of the information in the world; yet, using only what comes in from the senses, we are able to understand and navigate a three-dimensional world of colour, sound, smell and touch. How does a brain that lives in the dark understand the light?
Nature has solved this problem in a variety of ways. Bees see ultraviolet light reflected from the petals of a flower. Aquatic mammals emit low-frequency calls that can be heard by potential mates hundreds of kilometres away, and owls use their acute hearing to detect faint high-frequency sounds, such as the rustling of a small shrew under layers of snow. Turtles and birds can use magnetic fields to migrate thousands of kilometres accurately, while ants use smell to find their way to and from food sources. Although the means for accomplishing these tasks vary greatly, they all fundamentally solve the same underlying problem: how does an animal gather and use information about the world around it in order to survive?
It is important to note that there is nothing particularly special about vision, smell, sound or taste; it is what we do with that information that makes it special. Our sensory education begins before birth. Babies in the womb learn the sound of their mother's voice (Hepper, Scott, & Shahidullah) and will prefer her smell long before they can clearly see her (Varendi, Porter, & Winberg, 1994). They learn to prefer flavours that their mothers eat while pregnant and breastfeeding (Mennella, Jagnow, & Beauchamp, 2001). From this point forward, we use our experiences with the world to make hypotheses about what new information means.
So how do we turn a world of meaningless information into a dynamic environment that we can effectively navigate? We use the information from the world, or sensations, to build our experiences. Sensations are features of the environment, such as the electromagnetic wavelengths of light or the changes in air pressure that create sound, that we use to create an understanding of the world. Think of sensations as the raw materials of perception. These sensations are transduced, or translated, by the sensory system into the electrochemical language of the brain. The brain takes a given message and combines it with previous experience to create a perception. For instance, the sensation of 675 nm of light reflecting from my water bottle is transduced by cells in my eye, and my brain uses this information to perceive red, and a water bottle for that matter.
Maryam is driving her car when a song with a loud and rhythmic bass comes on the radio. As she bobs her head to the music, she focuses on the sound and timing of the bass drum as it hits her ears. In thinking about the song this way, Maryam is focusing on ____________.
Selena is at an art exhibit on Pointillism, an artistic technique where an artist uses many small dots to create a whole image. When looking at this image, Selena does not focus on the individual dots, but rather sees a magician holding a flower in front of a background of dynamic patterns and colour. This experience best explains the process of ____________.
5.1.1 Top-down and Bottom-Up Processing
How do we see? Or hear? How do we understand what information means? We see by learning to see. There is no inherent meaning in information; it is how that information is used that gives it value.
Perception is only partly based on the information coming in from the world. It is easy to imagine that the eye works as a camera or that the ear is a recorder, but the process we use to create meaning from information is far more complicated. We also use memories about the way the world works to interpret these messages. Let’s consider some examples.
Yuo can raed this snetnece even wehn the leteress are msesed up. Tihs is bceause yuor brian uess porir exeprience with wrods and lagunage to raed.
If you find that you had difficulty reading the above sentence the first time, try it once more. You will find it will begin to make sense.
3V3N MΦR3 @M@ZINGLY, YΦƱ C@N R3@D THIS S3NTENC3 WITHΦƱT TΦΦ MƱCH DIFFICƱLTY, 3V3N THΦƱGH MΦST ΦF TH3 VΦW3LS H@V3 B33N R3PL@C3D
Even though you may find that it takes a few tries to really understand these sentences, you will notice that you can eventually interpret what they say. Once you do, you will not be able to undo the knowledge; that is, you will not be able to see it as nonsense again. The brain uses your prior understanding from years of reading experience to make a “guess.” Once that guess makes sense, you use memory to apply it to future problems.
T@K3 TH1S S3NT3NC3 FΦR EX@MPL3, YΦƱ WILL NΦT1C3 TH@T TH1S 1S MƱCH 3@S1ER TΦ R3@D TH@N TH3 ΦN3 @BΦV3, 3V3N THΦƱGH YΦƱ H@V3 N3V3R S33N 1T B3FΦR3.
Our perceptual world is created by combining two processes. The first, bottom-up processing, is the neural processing that starts with the physical message or sensations. This is the early level analysis that prepares the information for use. Top-down processing occurs when we combine this incoming neural message with our understanding of the world to interpret information in such a way that it has value. Perceptions are created from these processes working together.
I imagine after watching this you are rather amazed. There is nothing magical about the mask, but your brain is using its understanding of how faces work to interpret ambiguous information. To make this illusion work correctly, it is important to light the mask in such a way that the visual system can perceive the depth when the face is turned away from you. When your brain receives both the visual messages associated with dimension and the spatial organisation of a face, it applies its understanding of how faces operate and interprets the face as coming out toward you.
Using the above video as an example, please match the concept with its application.
Combining different sensations with knowledge of how faces work in order to form the perception that a face is coming out at you, even when you know that the mask is concave.
The processing of the configuration of a face (two eyes, a nose, and a mouth) and the pattern of light and shadow that communicate depth
5.1.2 The Principles of Gestalt
Psychologists have identified strong innate tendencies to organise information in particular ways. The Gestalt psychologists, whom we discussed in Chapter 0, believed that perception was more complicated than simply assembling messages as if they were pieces of a puzzle. Rather, they believed that we are born with specific predisposed ways of organising information so that it has utility. Think back to the last party you attended. While music was playing in the background and other people talked, you were probably able to "tune out" all the background noise in order to participate in one conversation. This principle of figure-ground highlights one fundamental way we organise information. Certain information is given priority over the background. We see letters as separate from the page, one flavour stands out from the rest of the bite, and your teacher in front of the room can be seen as the background to the YouTube video you play during class on your laptop. Knock that off, by the way.
The laws of Gestalt, or the Gestalt principles of organisation, outline some fundamental ways we see the world. Organising the world into figure-ground is one; however, there are several other "tricks" the brain uses to group and understand the world.
The principle of proximity states that objects that are close to one another will be grouped together. It is most likely that you see the image on the left as one block of 36 uniformly distributed dots, or a square. It is also probable that when you look at the image on the right, you see three rows of 12 dots. The primary difference between these two images is the spatial organisation of the dots on the page, but this small difference can lead to a different perception.
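To make the principle concrete, grouping by proximity can be sketched computationally: a rule that links any two dots lying within a fixed distance of each other reproduces the two percepts described above. The dot layouts, the distance threshold, and the grouping rule below are illustrative assumptions, not part of the text.

```python
# Toy sketch of the Gestalt principle of proximity: dots are grouped
# whenever they lie within a distance threshold of one another.

def group_by_proximity(points, threshold):
    """Merge points into clusters when they are closer than `threshold`
    (Manhattan distance, an arbitrary illustrative choice)."""
    clusters = []
    for p in points:
        # Find every existing cluster containing a point near p...
        near = [c for c in clusters
                if any(abs(p[0] - q[0]) + abs(p[1] - q[1]) <= threshold
                       for q in c)]
        # ...and merge them all, together with p, into one cluster.
        for c in near:
            clusters.remove(c)
        clusters.append([p] + [q for c in near for q in c])
    return clusters

# Left image: a uniform 6x6 block of 36 dots, each 1 unit apart.
uniform = [(x, y) for x in range(6) for y in range(6)]

# Right image: the same kind of dots arranged as 3 rows of 12,
# with extra vertical space (2 units) between the rows.
rows = [(x, 2 * y) for x in range(12) for y in range(3)]

print(len(group_by_proximity(uniform, 1)))  # one group: a single square
print(len(group_by_proximity(rows, 1)))     # three groups: three rows
```

The only thing that changed between the two inputs is spacing, yet the grouping (the "perception") differs, mirroring the demonstration in the text.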
The principle of similarity states that objects that are physically similar to one another will be grouped together. Think of the spectators at a football game; it is usually pretty easy to tell who wants which team to win. We instantly and easily group people by the team colours.
In the image below, it is probable that you see rows of black and blue dots rather than a single block of 36 dots.
The principle of closure states that people tend to perceive whole objects even when part of that information is missing. Looking at the images below, it is unlikely that you see nonsense or random lines, and much more likely that you have organised these lines into the perception of a circle and the perception of a rectangle. This is because, despite missing some information, the lines that are available are sending a coherent message.
The principle of good continuation states that if lines cross or are interrupted, people tend to still see continuously flowing lines. You are most likely to see two keys here even though part of the image is interrupted.
Finally, the principle of common fate states that objects that are moving together will be grouped together. When you see a flock of birds moving through the air, you are more likely to see them as a whole group and not a mass of individual dots.
Using these principles, we are able to organise information in a predictable and meaningful way.
In the image below, you most likely separate the stadium into two groups: red shirts and blue shirts.  This is most likely because you are using the principles of __________. (choose all that apply)
At second four of the video above, you most likely saw two distinct groups of fish. You may have grouped them because of the law of similarity (one group is blue and the other is yellow, black, and white). What is another possible reason for grouping the fish this way?
The principle of common fate
The principle of continuation
The principle of figure ground
The principle of simplicity
All of the above are reasons
When looking at this picture it is probable that you see the ballet dancer as separate from the mirror.  This is best explained by the principle of ________.
5.2 Vision: From Light to Sight
5.2.1 The Eye
Humans are a visual species; in fact, 20% of the cortex plays a role in the interpretation of visual information (Wandell, Dumoulin, & Brewer, 2007). So much work goes into the act of simply looking, yet we take it completely for granted.
Light is a form of electromagnetic radiation. Although the spectrum spans from gamma rays to radio waves, we are only able to see a narrow band, ranging from around 400 to 700 nanometres. From this, we are able to perceive and navigate a dynamic world of light, colour, shadow, and depth.
Light travels from the sun through my window to a surface, perhaps my dog looking sadly at me and waiting to go on his walk. Although some light is absorbed by Ziggy, much of it is reflected from the surface of his cute face, through the air and atmosphere. From the moment a wave of light enters my eye, the eye actively adjusts its behaviour to maximise the quality of light that reaches the sensory cells in the retina. The first obstacle the image of my dog must go through is my cornea. This outermost, transparent, protective layer of my eye contributes to my ability to focus on my dog. Light refracted from Ziggy then enters my eye through the pupil, a hole that expands and contracts depending on the environment. Because the cells in the back of the eye are sensitive to light, it is important to regulate how much enters the eye. In brighter environments, the pupil can become quite small to reduce the amount of light. In dimmer conditions, the pupil dilates so that more light can reach the retina. The size of the pupil is controlled by the relaxation or tension in a band of muscles attached to the iris. The iris gives your eyes their colour, but does not play a specific functional role in vision.
Behind the pupil, light travels through the lens. This flexible piece of tissue is layered like an onion, and it helps refract light and bring my dog into focus against the sensory cells in my retina. This process is known as accommodation and is determined by the distance between the lens and the object being viewed. When an object is close to you, your lens is thicker and rounder; as an object moves further away, muscles attached to the lens relax and the lens elongates.
Two common vision problems can be traced directly to the lens operating incorrectly. Individuals who are nearsighted have lenses that bring light into focus before it reaches the retina. Individuals with myopia (nearsightedness) can see objects clearly when they are close, but as objects move further away they quickly become harder to discern. On the other hand, individuals with farsighted vision (hyperopia) can see objects in the distance quite clearly, but as objects move closer, they appear blurry. This is because the lens refracts light so that it focuses behind the retina.
Light then must pass through five layers of cells in my retina to arrive at the photosensitive cells in the back of my eye; this is where light is transduced into cellular activity. This will be the last time the message is in the form of electromagnetic energy. From this point forward, the image of my dog will be composed of patterns of neural firing.
In the retina, two kinds of specialised photosensitive cells called rods and cones transduce energy into neural language. This translation is chemically based, as each cell contains a photopigment that is sensitive to light. The chemical reaction leads the cell to send a message to the adjacent neurons, and ultimately, these neural impulses are sent to the brain.
The back of each retina contains approximately 126 million photosensitive cells (give or take a few), each corresponding to a particular region of the visual field. There are two different kinds of cells, rods and cones, and each serves a particular function. In the centre of the retina, directly behind the pupil, you will find the fovea, a dense cluster of approximately 6 million cones. These cells respond best when there is a lot of light in the environment. Because only a few cones connect to each adjacent ganglion cell, cones also transmit information about fine detail, giving this region high visual acuity. The 120 million rods in each eye are primarily found in the periphery of the retina. Rods are typically sensitive at lower levels of light and are the primary cells used for night vision. That is, the rods will generate neural impulses even when there is only a small amount of light available. This is why it is sometimes much easier to see something in the dark if it falls slightly to the side of the fovea. It should not surprise you, then, that most nocturnal mammals have a large percentage of rods in their retinas.
Dark adaptation occurs as rods and cones adapt to changes in light. When entering a dark classroom after being outside in the bright daylight, most people cannot see much. This occurs because the cells in your eye need time to adjust to the sudden change in light. Dark adaptation occurs in two stages. First, the cones rapidly respond to the change in light. After about eight minutes in the dark, the cones cannot become any more sensitive; the rods, however, will continue to increase their sensitivity for an additional 20 minutes. When you try to navigate in the dark, you are indeed quite limited in what you can see directly in front of you.
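The two-stage time course described above can be sketched with a toy model. The curve shapes, time constants, and sensitivity units below are illustrative inventions; only the qualitative pattern (cones adapt quickly but plateau, rods adapt slowly but end up far more sensitive) follows the text.

```python
# Toy model of the two-stage dark adaptation curve. All numbers are
# illustrative, loosely based on the "~8 minutes for cones, ~20 more
# minutes for rods" figures in the text.
import math

def cone_sensitivity(minutes):
    """Cones adapt quickly but plateau at a modest sensitivity."""
    return 30 * (1 - math.exp(-minutes / 2))   # plateaus near 30 (arbitrary units)

def rod_sensitivity(minutes):
    """Rods adapt slowly but become far more sensitive in the dark."""
    return 100 * (1 - math.exp(-minutes / 10))  # plateaus near 100

def visual_sensitivity(minutes):
    """Overall sensitivity follows whichever receptor is better adapted."""
    return max(cone_sensitivity(minutes), rod_sensitivity(minutes))

for t in (1, 8, 15, 30):
    print(f"{t:2d} min in the dark: sensitivity {visual_sensitivity(t):5.1f}")
```

Early on the cone curve dominates; once the rods overtake it (the "rod-cone break"), sensitivity keeps climbing for a further stretch, which is why night vision continues to improve long after the cones have finished adapting.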
A second difference between the rods and cones is that cones are the only cells that communicate information about the wavelength, perceived as colour, of an object. The rods respond only to the amount of light, but do not communicate information about the quality of that light. That is, although we have a mental picture of the world in full colour and detail, the retinal image that hits our eye is much closer to this.
You will notice that the image is upside down. This is because, during refraction, the lens inverts the image. The brain uses its prior understanding of how the world works to present a conscious perception of the visual world as right-side up. You will also notice that only the centre of the image is in focus or in colour. This is the portion of the eye populated by high-acuity, colour-sensitive cones. Further from the centre, the image becomes blurrier and changes from colour to black and white. This is the portion of the visual field processed by the rods. Rods often get a bad rap as the less interesting of the two photoreceptors, but this reputation is not particularly deserved. Rods compile early information about the locations of objects and of motion in the environment.
You will probably balk at this image, as this is not the world we “see.” The world in our mind is one full of rich detail and colour; this, again, is the result of the brain making assumptions about the world and filling in the gaps. How does the retinal image become this rich visual perception? The image of my dog still has a long way to travel before I know what I am looking at.
Arrange the parts of the eye below in the order that light must travel to reach the rods and cones.
Match the structure with its function.
High-acuity photoreceptors that process early information about colour
Highly sensitive photoreceptors that respond to low levels of light
Helps to focus light on the photosensitive cells
The structure that gives your eye its colour
Protective outer layer of the eye
Hole allowing light to enter
Imagine you are in a biology class and you are dissecting the eye of an animal that hunts at night. What prediction would you make about the retina of this animal?
This animal would have a large number of cones
This animal would have a large number of rods
This animal would not have a blind spot
This animal would not rely on vision to hunt
This animal would have pupils that can become small
Imagine you witness a car accident and instead of stopping, one car continues to drive away. As you focus on the license plate of the fleeing car, which part of your eye are you using?
Rods in the centre of your vision
Cones in the periphery of your vision
Your blind spot
Cones in the centre of your vision
Rods in the periphery of your vision
Please click on the portion of the eye with the highest concentration of cones, also known as the fovea. To ensure accurate grading, please click in one of the circles.
Please click on the structure that bends to accommodate light entering the eye in order to focus it on the retina. To ensure accurate grading, please click in one of the circles.
Please click on the portion of the eye that has the highest concentration of rods. To ensure accurate grading, please click in one of the circles.
5.2.2 The Retina
I told you a few moments ago that light must first travel through several layers of cells in order to reach the photoreceptors. After the rods and cones react to light, they send their messages to bipolar cells. Even this early in the pathway, the cells are beginning to interpret the information entering the eye. Bipolar cells summate the firing of several photoreceptors and send a different kind of message to a ganglion cell. The number and kinds of connections that bipolar cells make are determined in large part by their location. In peripheral vision, cells commonly referred to as diffuse bipolar cells can receive messages from as many as 50 rods. Diffuse bipolar cells then summate the experience of the photoreceptors and send a single message to the ganglion cell. By comparison, midget bipolar cells receive input from only a single cone, and this message may be sent to only a single ganglion cell. This explains the difference in visual acuity across the surface of the retina. While midget bipolar cells in the centre receive a large amount of information about the qualities of a single point of light, cells in the periphery receive little information from a much larger area of the retina.
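A minimal sketch of this convergence difference, with invented signal values: pooling many rods yields a strong but spatially coarse message, while relaying a single cone preserves location at the cost of raw sensitivity.

```python
# Sketch of bipolar-cell convergence as described above. Signal values
# are arbitrary illustrative units, not physiological measurements.

def diffuse_bipolar(rod_signals):
    """Summate many dim rod responses into one stronger message.
    The cost: the ganglion cell no longer knows which rod fired."""
    return sum(rod_signals)

def midget_bipolar(cone_signal):
    """Relay a single cone's response, preserving its exact location."""
    return cone_signal

# In a dim scene, each of 50 rods catches only a tiny amount of light,
# but the pooled message is easily strong enough to detect.
dim_rod_inputs = [1] * 50
print(diffuse_bipolar(dim_rod_inputs))  # pooled signal: 50

# A single cone under the same conditions sends a much weaker message,
# but one tied to a precise point in the visual field.
print(midget_bipolar(1))                # signal: 1
```

The same trade-off explains the acuity difference in the text: fifty-to-one pooling boosts sensitivity in the periphery, while one-to-one wiring at the fovea preserves fine detail.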
The diffuse bipolar cells connect to as many as 50 ____________.
The advantage to these first few steps is that they allow the brain to receive a slightly more complex message. Each ganglion cell has a receptive field, meaning that each ganglion responds to activity only when light falls on a specific portion of the eye and only when specific cells are active. The receptive fields of the ganglion cells are often organised in a centre-surround fashion. That is, when light falls on the centre of the receptive field, the cell will respond more rapidly, but when the signal falls on the surround part of the receptive field, the cell reduces its firing rate. This allows a single cell to send a variety of information about the surface of an object. Just as there are several types of photoreceptors and bipolar cells, there are also several kinds of ganglion cells. Small ganglion cells (often called P cells) receive information from the midget bipolar cells. P cells make up approximately 70% of the ganglion cells in the retina and send signals to the brain about qualities of colour and detail. Larger ganglion cells (M cells) are found in the periphery and receive their signals from the diffuse bipolar cells. These signals carry information about motion and visual stimuli in the periphery. Notice that we have a complete reversal of the distribution of information from the retina to what is entering the brain: approximately 70% of the message has been created from the small portion of the retina covered by cones, and the remaining 30% has been consolidated from 120 million rods.
As you can see from the image below, the ganglion cells can now send several messages about the qualities of light stimulating the receptive fields. When a small point of light falls on the centre, the centre-on cells increase their firing rate, while the centre-off cells decrease theirs. When a small portion of light hits the surround, the centre-on cell will inhibit and the centre-off cell will excite. You will notice that both cells fire at the same rate when the entire receptive field is illuminated. This uniform firing allows both cells to communicate that there are no discernible differences in light across the receptive fields.
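The firing patterns described above can be written as a toy model. The baseline rate and the sizes of the excitatory and inhibitory effects are arbitrary illustrative numbers; what matters is the pattern: centre light excites the on-cell and inhibits the off-cell, surround light does the reverse, and full illumination returns both to baseline.

```python
# Toy centre-surround receptive field, following the description above.
# Rates are illustrative "spikes per second", clamped at zero.

BASELINE = 10  # spontaneous firing rate with no light at all

def centre_on_response(light_in_centre, light_in_surround):
    """Centre light excites; surround light inhibits."""
    return max(0, BASELINE + 20 * light_in_centre - 20 * light_in_surround)

def centre_off_response(light_in_centre, light_in_surround):
    """Mirror image: centre light inhibits; surround light excites."""
    return max(0, BASELINE - 20 * light_in_centre + 20 * light_in_surround)

# Light only in the centre: on-cell speeds up, off-cell slows down.
print(centre_on_response(1, 0), centre_off_response(1, 0))  # 30 0

# Light only in the surround: the pattern reverses.
print(centre_on_response(0, 1), centre_off_response(0, 1))  # 0 30

# The whole field illuminated: excitation and inhibition cancel,
# and both cells sit at baseline, signalling "no edge here".
print(centre_on_response(1, 1), centre_off_response(1, 1))  # 10 10
```

The last case is the key one: uniform illumination produces uniform firing, so a change in these cells' rates only occurs where light *differs* across the field, which is exactly why this organisation helps the brain find edges.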
Click on the portion of the retina that makes up ~ 70% of the information leaving the eye. 
Click on the portion of the retina that is processed by the diffuse bipolar cells and M-cells. 
Even with this early organisation, the brain is able to understand more complicated information than millions of individual points of light. For instance, the centre-surround organisation of the ganglion cells helps the brain interpret where the edges of objects begin and end.
Assume the cell below is a centre-on cell. Match pattern of cellular firing with the correct receptive field.
Receptive Field 1
Receptive Field 2
Receptive Field 3
Receptive Field 4
Click on the cells that transmit information from the rods to the M-cells.
Click on the ganglion that will transmit information about colour to the brain.
Click on the ganglion that will transmit information from the periphery of the retina.
The messages finally leave the eye and enter the brain via the optic nerve, made up of the axons of both the M and the P ganglion cells. Because these axons must pass through the retina to exit the eye, there is a spot on the retina where there are no photoreceptors. This creates a small blind spot in each eye; it is not noticed because the brain uses information from the other eye and its assumptions about the world to “fill in” the gap.
If you would like to see your blind spot, look at the image below while covering one eye. Focus on the centre red dot and slowly move closer to the screen (or bring the screen closer to you). You will notice that at a certain point, one yellow star will disappear as it moves into the portion of the retina where the axons leave the eye. Even more interestingly, you should also notice that where once there was a star, there is now a continuation of the spiral pattern. That is an example of your brain filling in its best guess of what should be there.
If you completed the demonstration above with your left eye covered, the star on your ____________-hand side should have disappeared.
The blind spot is the place where the axons of the cells leave the eye.
5.2.3 To the Visual Cortex
This message travels to the optic chiasm, where the axons from each eye are reorganised for more sophisticated processing. Information from the right visual field, which falls on the left half of each retina, is sent to the left hemisphere, while information from the left visual field is sent to the right hemisphere.
My brain’s first interaction with the image itself occurs after this split, in the Lateral Geniculate Nucleus (LGN) of the thalamus. In Chapter 3 we discussed that the thalamus is the relay centre of the brain. Here, several kinds of sensory information are analysed and reorganised before the message travels to the cortex. The LGN is organised into six sublayers, and each layer deals with specific types of information corresponding to the M and P cells. Information is further combined and consolidated before travelling to the visual striate cortex at the back of the brain.
The Visual Striate Cortex, or Visual Cortex (VC), is located in the occipital lobe. Here, important features of the visual world are assembled and identified. We have over 30 areas in the back of the brain dedicated to analysing and organising visual information. Throughout this entire pathway, every neuron maintains a spatial organisation. That is, a specific point on my retina (Point A) is represented along the visual pathway, and a point right next to it (Point B) is represented by neurons right next to those responding to Point A. This spatial organisation, known as retinotopic organisation, is how we maintain a map of the visual world throughout processing. In the Visual Cortex, these points are assembled into lines and edges or features. Feature detectors are specialised cells in the VC that respond most actively to specific stimuli.
Hubel and Wiesel (1962) discovered feature detectors while examining the cortex of cats and monkeys. Using electrode recordings of single cells in the visual cortex of their animal participants, the researchers identified one type of feature detector known as a simple cell. This cell responds to small, stationary bars of light oriented at specific angles. Complex cells respond most vigorously to vertical lines in motion. As the line moves further from a vertical orientation, the cell will decrease its firing rate.
A second class of cell responds to lines of particular orientations that are moving in specific directions. For instance, one cell might increase its firing rate when a vertical line is moving from left to right, like when you watch a person walking down the street, but not when the line is moving up and down, like when the same person jumps up and down. The following video shows Hubel & Wiesel discovering complex and hypercomplex cells in the visual cortex of a cat.
However, even after analysis in the Visual Cortex, my brain is still not done. The information about my dog is sent to other regions of my cortex, where I use the assembled visual information to understand still more complex parts of the visual message. Information travels along the Ventral stream, also known as the What stream, to the temporal lobe. Here, visual information is identified and I know that I am looking at a dog. A second pathway, the Dorsal stream or Where pathway, carries visual information to the parietal lobe, where I use the incoming visual information to understand that my dog is to my left and by the door. Visual information also travels to the limbic system, which helps provide the warm fuzzy feeling I experience when I see my dog. It is humbling to think of the process that my brain goes through just to understand that my dog is sitting by the door. This realisation becomes even more humbling when you think that this same process occurs every time you shift your gaze.
The LGN is also known as which of the following?
Lateral Geniculate Nucleus
Lateral Ganglion Nucleus
Longitudinal Ganglion Nucleus
Longitudinal Geniculate Nucleus
Correctly order the pathway visual information must travel to reach the VC.
Diffuse and midget bipolar cells
Lateral Geniculate Nucleus of the Thalamus
M-cells and P-cells
The Dorsal stream is also known as the ____________ stream.
5.2.4 Colour Vision
Thus far we have talked about the pathway light must travel for basic analysis, but what about other types of information? Colour seems like one of the simplest things the brain processes but, as you will see, it is not so simple. Qualities associated with colour are processed at every stop along the visual pathway, starting with the cones in the eye.
Colour is the perception of wavelength; longer wavelengths (~670 nm) create the perception of red, medium-length waves (~530 nm) produce greens, and shorter wavelengths (~450 nm) create perceptions of blues. White light, although appearing colourless, is actually an equal representation of all wavelengths. The reason a person’s shirt appears red is that pigment in the shirt absorbs the medium and short wavelengths, reflecting only long wavelengths back to the eye. The reason we see a rainbow is that light refracts through the water in the air, meaning that your brain creates rainbows.
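The coarse wavelength-to-colour mapping in this paragraph can be written as a small lookup. The band boundaries below are illustrative round numbers chosen to bracket the ~450/~530/~670 nm examples, not precise perceptual thresholds.

```python
# Rough sketch of the wavelength-to-hue mapping described in the text.
# Band edges (490 and 580 nm) are illustrative round numbers.

def perceived_hue(wavelength_nm):
    """Map a wavelength to a coarse hue; humans see roughly 400-700 nm."""
    if not 400 <= wavelength_nm <= 700:
        return "invisible to humans"   # e.g. ultraviolet or infrared
    if wavelength_nm < 490:
        return "blue"                  # short wavelengths
    if wavelength_nm < 580:
        return "green"                 # medium wavelengths
    return "red"                       # long wavelengths

print(perceived_hue(450))    # blue
print(perceived_hue(530))    # green
print(perceived_hue(670))    # red
print(perceived_hue(1000))   # invisible to humans
```

Note that the bee from earlier in the chapter would need a different function: its visible band extends into the ultraviolet, which is precisely the point that colour is a perception built by a nervous system, not a property of the light itself.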
There is no colour in the world; rather, our brain uses information to create colour. We do not do this because colour is pretty, but because it allows us to derive useful information. For instance, riper fruit is more nutritious, and it also (generally) turns a bright colour such as red, orange, or yellow. Our ability to perceive this change in the surface properties of the fruit against their green background makes the task of locating this food source particularly easy.
When light reaches the retina, the cones in the fovea respond to qualities of wavelength. The human eye has three types of cones. One type of cone responds maximally to short wavelengths, which we perceive as blue. Perhaps unimaginatively, these have been named short cones or just S-cones. Similarly, we have medium wavelength cones or M-cones that respond best to greens and long wavelength cones (L-cones) that respond to oranges and reds.
The activity level of each cone results from the amount of photopigment in the cell.
The trichromatic theory of colour vision proposes that colour information is identified by comparing the activation of the different cones. For example, when you see a blue car, it is because the car is reflecting short wavelengths to your eye, which activate the S-cones but not the M- or L-cones. When you see a red stoplight, it is because the light coming from it activates the L-cones, but not the S- or M-cones. When you see a colour like pumpkin orange, more than one type of cone is active; most colours we experience are a mixture of wavelengths. Trichromatic theory explains several interesting aspects of colour vision, including colourblindness. On occasion, individuals are born without one type of cone, or with cones containing the wrong photopigment. For instance, there are two types of red-green colour blindness. The first, known as deuteranopia, occurs when “green” cones have “red” photopigment; a second kind, protanopia, occurs when “red” cones have “green” photopigment. Because the cells respond equally to these two wavelengths, the brain cannot perceive a difference between them.
5.2.4 Colour Vision: Opponent Process
The trichromatic theory is well supported, but it does not explain all aspects of colour vision. In particular, it has difficulty explaining how people perceive yellow. You can probably picture a colour that is yellowish-red or bluish-green, but can you picture a greenish red?
The cones send their messages to the midget bipolar cells and then to the P ganglion cells. The P cells operate slightly differently from the centre-surround organisation we have discussed. A P cell responds vigorously to one wavelength and reduces its firing if it receives a signal indicating a different one. The colours have been paired so that the cell increases its firing rate in response to one colour of the pair and decreases it in response to the other. This creates three opponent pairs of colours: red is paired with green, blue with yellow, and black with white. To understand how this works, look at the image below. Right below this gentleman's glasses, you will see a small yellow dot. Focus on this spot for about 45-60 seconds.
As you look at the yellow dot, the P-cells responding to the red regions signal that red is present and green is absent. When you look away, these fatigued cells reduce their firing rate below baseline, and your brain interprets the reduction in firing as the presence of green. This is known as a negative afterimage. This opponent-process organisation is maintained in the LGN of the thalamus.
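A toy opponent channel makes the afterimage logic concrete. The baseline value, equal weighting of the L and M inputs, and the fatigue factor are all illustrative assumptions, not measured physiological values.

```python
def red_green_channel(l_response, m_response, baseline=0.5):
    """Signed opponent signal: values above baseline read as 'red',
    values below baseline read as 'green'. Purely illustrative."""
    return baseline + (l_response - m_response)

def perceived(signal, baseline=0.5):
    """Interpret the opponent signal the way the brain is described
    as doing: relative to the cell's resting (baseline) firing rate."""
    if signal > baseline:
        return "red"
    if signal < baseline:
        return "green"
    return "neutral"

# While staring at red: strong L input, little M input -> signal above baseline.
# After adaptation the "red" side is fatigued (gain < 1). A neutral white
# surface drives L and M equally, but the fatigued side now loses, so the
# signal dips below baseline and the surface is read as green: the afterimage.
fatigue_gain = 0.6  # assumed adaptation factor
after = red_green_channel(fatigue_gain * 1.0, 1.0)
```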
TED Talk: Beau Lotto: Optical illusions show how we see
In the demonstration at the beginning of the talk, which colour was the same?
Match the cone with the colour you are likely to perceive when that cone fires
Black and white
Imagine you want to make an after-image of the Australian Flag. What colour combination should you use?
Black, yellow, and green
Red, white, and blue
Blue, green, and purple
Red, green, and yellow
Black, green, and blue
According to Beau Lotto, how does colour enable us to see light?
Depending on the way we are taught to see colour
According to the quality of light they reflect
The same way our ancestors experienced smell
The same way a bat perceives texture through sound
How do bees see colour?
Bees do not really "see" colour; rather, they sense its electromagnetic properties through sense organs in their feet
Bees see colour using different mechanisms than we do
Bees see colour the same way we do, by using the relationship between different coloured surfaces
Bees' eyes are sensitive to "pure" colours like red, green and blue, but not to yellows or purples.
Juan is running an experiment that asks people to manipulate three different kinds of wavelengths in order to match colour. It would seem that Juan is running an experiment relying on which of the following?
Opponent process theory
Colour by component parts theory
A wavelength of 680 nm is most likely to be perceived as ________.
5.2.5 Perceiving Depth
Depth perception is another incredibly useful ability that we often take for granted. The brain uses both bottom-up and top-down processing to understand what the retinal image is communicating about depth. For instance, although a house in the distance may appear quite small, your perception is likely to be that the house is far away rather than tiny. Similarly, my students understand as I pace back and forth across the room that I am not growing or shrinking. The brain uses reliable cues to infer information about depth. There are two kinds of depth cues: those that require only one eye, known as monocular depth cues, and those that require two eyes, known as binocular depth cues.
Monocular Depth Cues
Monocular cues are also referred to as Pictorial Cues, because they can be represented on a two-dimensional canvas. The first cue we will discuss is Occlusion. Occlusion occurs when one object partially blocks the view of a second object. The partially hidden object is seen as further away than the whole object.
It is most probable that you interpret the grey puppy as being in front of the white puppy, rather than interpreting the white puppy as having only the front half of a body. This is because the grey puppy is occluding the white puppy.
A second depth cue is known as Relative Height. To use this cue effectively, we must also use our knowledge of the horizon. Objects closer to the horizon appear further away; the greater the distance between an object and the horizon, the closer the object appears. For example, in Figure 5.23 the man crossing the street is lower in the image than the bus, and we perceive him as closer.
Relative Size also relies on our understanding of the world. According to this cue, when two objects are of equal size, the one that is further away will take up a smaller portion of the retina. In the image below, you probably see the small baby elephant as closer to you than the larger adults coming over the hill. This is, in part, because the baby is taking up a larger portion of your retina.
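The retinal-size logic behind Relative Size is ordinary trigonometry, not anything specific to vision science: the visual angle an object subtends at the eye shrinks as its distance grows. A short sketch:

```python
import math

def visual_angle_deg(object_size_m, distance_m):
    """Visual angle (degrees) subtended at the eye by an object of the
    given size at the given distance - standard geometry."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

# The same 3 m tall elephant at 5 m versus 50 m: the retinal image is
# roughly ten times smaller in the second case, so if we assume the two
# animals are really the same size, the smaller image reads as "farther".
near = visual_angle_deg(3, 5)    # about 33 degrees
far = visual_angle_deg(3, 50)    # about 3.4 degrees
```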
A well known illusion called the "Ames Room" takes advantage of the assumption of Relative Size.
When you look at the room head on, you assume that you are looking at a rectangular room, and the room is carefully constructed to give you cues that it is indeed rectangular. This is an illusion; in actuality, one corner of the room is more than twice as far away from you as the other. The following video outlines the construction and history of the Ames room, discussing how and why it works, along with a brief demonstration of the room.
Perspective convergence is a common cue used in landscapes, and it is a reliable cue for depth. As parallel lines move away from us into the distance, they seem to converge or come closer together.
We use the cue of familiar size when we judge distances based on our knowledge of that object's size.
You likely see this lighthouse as far away, in part because you understand that lighthouses are not tiny.
Atmospheric perspective occurs when more distant objects appear hazy, often with a slight blue tint. This is because as the distance between you and an object increases, more air particles, dust, pollution and water droplets occupy the space between your eyes and the object.
Binocular cues require input from both eyes; the brain makes comparisons between the two eyes in order to understand depth. Because your eyes are in slightly different locations on your head, each retina receives a slightly different image of the world. This retinal disparity is a useful cue because the farther away an object is, the smaller the disparity between the two retinal images. The brain calculates depth information by comparing the images in the right and left eyes.
The brain also uses the degree to which the eyes must turn inward to focus on an object, a cue known as convergence. Imagine trying to look at a snowflake on the tip of your nose: your eyes would have to turn drastically inward to bring the snowflake into focus on your foveae. When an object is in the distance, the eyes look mostly straight ahead. The brain can use the tension in the eye muscles to make determinations about depth.
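How sharply the eyes must converge can be estimated with basic trigonometry. The 63 mm interpupillary distance below is a typical adult value assumed for illustration, not a figure from this chapter.

```python
import math

def convergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle (degrees) each eye must rotate inward from straight ahead to
    fixate a point at the given distance, for an assumed interpupillary
    distance of 63 mm. Simple geometry, ignoring eye anatomy."""
    return math.degrees(math.atan((ipd_m / 2) / distance_m))

# A snowflake 2 cm from the eyes demands an extreme inward turn,
# while an object 10 m away leaves the eyes looking almost straight ahead.
snowflake = convergence_angle_deg(0.02)
distant = convergence_angle_deg(10)
```

The steep fall-off with distance is why convergence is informative mainly for nearby objects.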
Match the structure with its function/definition.
Ganglion cells that receive information related to colour and detail from the midget bipolar cells.
Motion-sensitive photoreceptors found in the periphery. Send signals to diffuse bipolar cells.
Midget bipolar cells
Portion of the retina where the ganglion axons exit the eye.
Visual portion of the thalamus; receives messages from M & P cells.
Large ganglion cells that receive their input from diffuse bipolar cells.
Location of higher level processing in the brain and the location of feature detectors.
Colour sensitive photoreceptors found in the fovea.
Specialised cells in the Visual Cortex that respond to lines oriented in specific ways and moving in specific directions.
Specialised cells in the Visual Cortex that respond to stationary lines of a particular orientation.
Small bipolar cells that receive messages from cones.
What is the correct order of structures light must pass through to be analysed by the visual system?
Lateral Geniculate Nucleus
Dorsal and Ventral Stream
Match the depth cue with its correct description.
When similarly sized objects are at different distances, the larger object is seen as closer.
When objects are below the horizon, the lower they are in the scene, the closer they are interpreted as being.
Parallel lines appear to grow closer to each other in the distance.
Molecules in the air make objects farther away appear hazier.
When one object partly blocks the view of a second object, the object in front is seen as closer.
5.3 Hearing and Sound
Sound is perhaps not as prominent in human consciousness as vision; no one falls in love with someone because of their deep, soulful ears. For mammals, however, sound is one of the most fundamental senses we possess. The first mammals lived underground in the dark, so vocalisations served as the primary means of communication between mother and offspring. In humans, infants are born recognising the sound of their mother's voice, and an infant's first cries are among the most immediate experiences after birth. Using sound, we can localise objects in space, and unlike vision, sound works just as well in the dark. In fact, animals that do not use light rely quite heavily on sound. Richard Dawkins has famously pondered whether bats hear in colour, and so they might when you consider that, using only high-frequency clicks, they can navigate a three-dimensional world, avoiding one another and any obstacles in their way while locating a small bug flying through the air. Sound is a rich source of information; equally amazing is the intricate hardware we use to understand it.
The physical message of sound is a form of energy that, like light, travels in a wave. Sound is a mechanical energy and requires a medium like air or water in order to move through space. What our brains interpret as sound is actually many small vibrating air molecules; they collide with other molecules, and the pressure travels across distance. It is possible to feel this pressure: if you have ever stood near the speakers at a concert, you can feel the force of the sound wave hit your chest. It is worth noting, however, that you shouldn't stand there too long, as exposure to loud sounds will damage your hearing.
The frequency of a sound is determined by the rate of vibration. We perceive high-frequency sounds as having a higher pitch. People can hear frequencies between 20 Hz and 20,000 Hz, but hear best between 1000 Hz and 5000 Hz - this is also the range of human speech.
A second dimension of sound is the intensity of the wave, which we perceive as loudness. Increased intensity raises the amplitude of the wave, and the wave arrives at our ear with more force. We measure the amplitude of a wave in decibels, or dB. When a sound above 100 dB reaches our ears, the force of the pressure can damage the structures of the middle and inner ear. Among the reasons we feel discomfort in our ears during rapid elevation changes, such as on planes, is that the ear is a pressure sensor. It is precisely this property that helps animals like marine mammals navigate a three-dimensional world.
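The decibel scale is logarithmic, which a short calculation makes clear. The 20 micropascal reference below is the standard threshold of hearing used to define dB SPL; the function itself is just the textbook definition, 20 log10(p/p0).

```python
import math

REFERENCE_PRESSURE = 20e-6  # Pa, the standard threshold of hearing

def sound_pressure_level_db(pressure_pa):
    """Sound pressure level in dB SPL: 20 * log10(p / p0)."""
    return 20 * math.log10(pressure_pa / REFERENCE_PRESSURE)

# Each 20 dB step corresponds to a tenfold increase in pressure, so a
# 100 dB sound carries 100,000 times the pressure of the faintest
# audible one - which helps explain why it can damage the ear.
threshold = sound_pressure_level_db(20e-6)  # 0 dB
loud = sound_pressure_level_db(2.0)         # 100 dB
```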
5.3.1 Entering the Ear
Sound enters your ear through your pinna. The pinna is the part you pierce, and it is shaped in such a way that it helps funnel sound into the ear canal toward the tympanic membrane, also referred to as the eardrum. The surface of the eardrum works just like the surface of a drum, transferring energy to the three smallest bones in the body, the Ossicles of the middle ear. The ossicles consist of the malleus, the incus and the stapes. These bones help to amplify the vibrations as they travel further into the inner ear. The stapes is connected to a small membrane called the oval window.
The oval window transfers these vibrations to the bony sound processor of the inner ear, the cochlea. This is where sound is translated into the neural language of the brain. Inside the cochlea is a flexible piece of tissue called the basilar membrane. Transduction occurs when the vibrations against the oval window cause fluid inside the cochlea to move. The fluid pushes against thin fibres known as cilia that are attached to the sensory hair cells. Sound causes the basilar membrane to "ripple"; this motion bends the cilia, in turn causing an excitatory message to cascade from the ear to the brain via the auditory nerve.
Which of the following is not one of the ossicles?
Another term for the tympanic membrane is the __________.
Please click on the pinna.
Please click on the portion of the ear that houses the basilar membrane.
Please click on the malleus, incus, and stapes.
The loudness of the sound corresponds to which of the following?
Amplitude of the wave
Intensity of the wave
Frequency of the wave
All of the above
Interestingly, while your eye has four different types of cells to code different types of visual information, all the hair cells in your ears are the same. Equally problematic, while light is spatially organised on the retina and this map is maintained through the brain to the cortex, sounds from many sources, complexities, amplitudes and frequencies all arrive at the ear at the same time. How does your brain make sense of this sea of information?
As with vision, the brain uses qualities of the sound to infer meaning. Different frequencies cause cells at different locations on the basilar membrane to fire: higher frequencies excite cells closest to the oval window, while lower-frequency sounds excite cells deeper in the cochlea. The brain uses the location of neural firing to understand sound. This concept is known as place theory (Békésy, 1960). We hear a specific pitch because cells at a specific place on the basilar membrane fire.
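Place theory can even be made quantitative. Greenwood's (1990) frequency-position function fits the mapping between position on the basilar membrane and the frequency that excites it; the sketch below uses his published constants for the human cochlea, though any such fit is an approximation.

```python
def place_to_frequency_hz(x):
    """Greenwood's (1990) frequency-position fit for the human cochlea.
    x is the fractional distance along the basilar membrane, from the
    apex (0, deepest in the cochlea) to the base (1, nearest the oval
    window). Constants are Greenwood's published human values."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# The base near the oval window codes high frequencies and the apex low
# ones, spanning roughly the 20 Hz - 20,000 Hz range of human hearing.
apex_hz = place_to_frequency_hz(0.0)   # about 20 Hz
base_hz = place_to_frequency_hz(1.0)   # about 20,700 Hz
```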
Place theory by itself, however, does not entirely explain the experience of hearing. One problem is that hair cells do not operate independently. Often, many are activated at the same time. We believe the brain is using additional information to code pitch perception. Frequency theory states that the brain also uses information related to the rate of cells firing. The more rapidly the cells fire, the higher the perception of pitch.
If the brain is interpreting the neural message based on how rapidly the cell is firing, the brain is relying on ________ theory.
Imagine you are a psychologist who is interested in how different pitches are perceived. You spend a lot of your time looking at the location of activation on the basilar membrane. It is probable you are using ________ theory in your investigations.
5.4 The Auditory Cortex
The auditory message makes several stops on its way through the brain before arriving at the auditory cortex, which is located primarily in the temporal lobes. As in the visual system, there is a well-developed organisation that allows the message to be assembled. Different components of sound are organised and analysed in the Medial Geniculate of the thalamus. The network from the medial geniculate has several stops, but the majority of information is relayed to the Auditory Cortex in the temporal lobe. Some cells there respond best to pure tones, while others respond to more complex sounds like patterns of speech. Much as the visual system uses a retinotopic organisation, the auditory system maintains a tonotopic organisation from the basilar membrane to the auditory cortex. The auditory system also has a "what" and a "where" stream. Because timing is so critical to understanding sound, the auditory system has specialised neurons for transmitting sound, with particularly rapid action potentials and unusually large terminal buttons to help relay the temporal components of the message.
Although we have begun to study the auditory cortex only relatively recently, we are discovering many similarities between the organisation for processing sound and that discovered for vision. The organisation is hierarchical, with simpler sounds like pure tones processed in lower regions and more complicated sounds, like human speech, processed higher up.
Visual Cortex is to Retinotopic as Auditory Cortex is to _______.
5.4.1 Sound Localisation
Among the most important pieces of information sound can provide is the location of objects in space. Many animals, including humans, use sound as a reliable cue for the location of objects. Just as your brain makes comparisons about the arrival of information in both eyes, your brain is able to locate objects from sound by comparing information arriving in both ears. Cues requiring comparisons between information from both ears are known as binaural cues.
We use two kinds of binaural cues for sound localisation. Interaural Time Differences are comparisons made between the arrival time of a sound in each ear. For a moment, assume you are in a crowded store and hear your friend calling your name from the right. The sound will arrive at your right ear a moment before it arrives at the left ear. By comparing the arrival times of different sounds, the brain is able to localise the sound from the right and left accurately. When a sound is in front of you, it will arrive at both ears at the same time.
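The interaural time difference can be estimated from simple geometry. The head width below is an assumed typical value, and this simple path-length model ignores diffraction of sound around the head.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
HEAD_WIDTH = 0.18       # m, an assumed typical ear-to-ear distance

def interaural_time_difference_s(angle_deg):
    """Extra travel time (seconds) to the far ear for a distant source at
    angle_deg from straight ahead (0 = directly in front, 90 = directly
    to one side). Simple path-difference model."""
    return HEAD_WIDTH * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

# A source directly in front arrives at both ears simultaneously; a source
# directly to the side leads by only about half a millisecond - the tiny
# timing difference the brain resolves when localising sound.
front = interaural_time_difference_s(0)
side = interaural_time_difference_s(90)
```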
Below is an example of what is known as a Binaural recording. Binaural recording uses two microphones arranged to capture sound at the approximate locations of human ears; typically, a mannequin head is used to secure a microphone in each "ear". Ideally, the result captures the 3-D soundscape, complete with information about the size of the room and the locations of sound sources. The recording below requires headphones, ideally ones with a designated right and left earpiece. Close your eyes and listen to the recording.
5.4.1A Demonstration: Binaural Recording
A second cue the brain uses is the intensity difference of the sound between the ears, also known as Interaural Level Differences. After a sound wave reaches the ear closest to the source, it must travel around the head to reach the second ear. The head absorbs and blocks a portion of the sound, so the ear further from the source receives a less intense message. More simply, the ear closest to the sound perceives the noise as slightly louder than the ear further away.
When comparing differences in arrival time between the two ears, we are using which of the following?
Interaural time differences
Interaural level differences
Lee is driving down the street and hears a siren on his left side. He can almost swear it sounds slightly louder to his left ear. It is likely that Lee is using which of the following?
Interaural Time Differences
Interaural Level Differences
5.4.2 Music and Speech Perception
Music and speech are two of the most fundamental ways humans use the auditory sense, and it is within their frequency range that we have the highest acuity. Music can be used to tell a story and fundamentally influence mood. Music has also been shown to have deep-rooted physiological effects on the brain and the rest of the body. When people listen to pleasurable music, heart rate, muscle tension and respiration change, and more blood is sent to regions of the brain associated with reward and motivation (Blood & Zatorre, 2001).
Among the more interesting aspects of music is the transition from the frequencies and timing of notes into the perception of melody. There is nothing particularly special about a sequence of notes as it arrives at the ear; it is experience, and the relationships between the notes, that give the sequence coherence.
One of the more familiar musical experiences is being unable to get a particular song out of your head. This experience has several names. Oliver Sacks referred to the phenomenon as sticky music; the one we all tend to relate to best is the earworm; and the more technical name is Involuntary Musical Imagery, officially defined as "the experience of an inability to dislodge a song and prevent it from repeating itself in one's head" (Beaman & Williams, 2010, p. 638). Interestingly, the experience of the earworm, though irritating, can tell us quite a bit about audition. For instance, although it is rare for a song to remain stuck for more than 24 hours, the duration of the experience is typically much longer than an average auditory memory. Also interesting is that it doesn't take much to get an earworm going. Often only the first few notes of a song, or even just a memory of the last time you heard it, can start the process, as I have just done in my own head. Research in this area may unlock clues about how memory and audition are linked in the brain.
Although all mammals and birds communicate with one another in some way, shape, or form, language and speech seem to be a uniquely human ability. Although some may balk at this statement, scientists devoted a rich period of psychological investigation to the question of whether nonhuman animals can create and use language. The results of these investigations opened doors to the minds of nonhuman animals and reframed what we believe them to be capable of understanding, but despite the incredible efforts of brilliant minds, no other animal seems able to use language as we do. This is not to say, however, that speech perception is unique to humans. Other animals can be trained to respond to human speech and can make distinctions between subtle sounds in language.
The production of speech has three basic components: respiration from the lungs, the vocal cords, and the vocal tract. Correct and fluent speech requires a tremendous amount of coordination between these systems, and speech perception occurs rapidly in the brain: in casual conversation, we produce 10-15 speech sounds per second. Although speech production is better discussed in the context of the somatosensory system, the production and perception systems work closely together, both when we speak and when we interpret speech.
Not only do we process these sounds rapidly, but we also use inflections and variations in sounds to capture meaning. Speech researchers can tell you, however, that it is not just as simple as hearing sounds made by a speaker. Often sounds have similar signatures in the ear, such as the sounds "ba" and "pa." Just as we have discussed previously, the brain uses context and information from the visual system to help interpret information.
The McGurk Effect is a particularly salient example of how visual information supplements the sounds coming into our ears. In the following video, this well-known effect is demonstrated. The audio track repeats the syllable "ba", but the speaker's lips are shown articulating either "ba" or "fa". When you watch the speaker mouth "ba", you hear "ba"; when you watch his lips form "fa", your perception of the sound changes.
What is Involuntary Musical Imagery?
When you associate specific images with certain songs
A disorder when certain songs bring uncomfortable or violent images to mind
The experience of a song getting stuck in your head
The technique used by musicians to translate visual sensations to auditory experiences
The study of Involuntary Musical Imagery is teaching us about which of the following?
How auditory and visual information are integrated in the brain
The relationship between music and the autonomic nervous system
The McGurk effect demonstrates which of the following?
The role the visual system plays in speech perception
The role the auditory system plays in perception of inflection
One of the ways the brain understands ambiguous sounds
5.5 Skin and Body Senses
The largest organ in the body is the skin. In biology class, you probably learned that the skin helps with thermoregulation and protects us from the environment. The skin also serves as the source of information about the surface qualities of objects. Think of the difference in firmness between a ripe peach and one that is under-ripe, or the texture of a piece of velvet and a piece of sandpaper. We are also able to understand information about where our body is in space; I know that I am sitting upright at my computer and not hanging upside down by my ankles because I am receiving information about the position of my limbs.
The physical message of touch is pressure. An object makes contact with the body, the receptor cells embedded in the skin respond, and the message travels up the spinal cord to the somatosensory cortex of the parietal lobe. Most information we gather about texture is derived from the responses of four types of mechanoreceptors located in the skin. The Merkel receptor and the Meissner corpuscle are located close to the surface of the skin and respond to pressure that is applied and then removed. The Merkel receptors fire continuously as long as the skin is making contact with an object, sending information about fine details; it is therefore not surprising that there is a high concentration of Merkel receptors in sensitive regions such as the fingertips. The Meissner corpuscle fires when the skin first encounters the stimulus and when it is removed. Located deeper in the skin are the Ruffini cylinder and the Pacinian corpuscle. The Ruffini cylinder is associated with interpreting the stretching of the skin, while the Pacinian corpuscle senses vibration and texture.
There are several types of skin receptors: some respond to temperature, some to constant pressure, and others to intermittent pressure, like those on my fingers as I type. Information about these different qualities of touch is transmitted to and interpreted by the brain. While one fibre may carry information about temperature, another carries information about texture, and a third about firmness. The brain combines these inputs to arrive at our perception of touch.
The somatosensory cortex organises information from the body. As we discussed in Chapter 3, there are several maps of your body on the surface of the parietal lobe. Similarly to the visual system, these maps are spatially organised - that is, two adjacent points of contact on your skin map to two adjacent points of neural activity on the cortex. This is known as somatotopic organisation. The brain does not prioritise messages from all parts of the body equally. We see a large portion of the cortex devoted to analysing information from your hands and your face, and only a small portion of the cortex analyses information from the torso and limbs. This makes sense when you consider that your hands, face, and to a lesser extent, feet, are the parts of the body we use to make contact with the world.
This image is known as the sensory homunculus. It is a visual depiction of what our bodies would look like if they were built in proportion to their representation on the cortex. As you can see, the face and hands are particularly large.
If you want to test your hand’s memory, there is a simple test you can perform. Ask a friend to place several objects in a bag - perhaps avoiding things that are excessively sharp. Reach into the bag without looking, and you will likely find that you can identify the objects readily, regardless of what your mischievous friend placed inside. This is because we learn the qualities of objects implicitly whenever we make contact with them. I understand that my coffee cup is not just a round mug, but that it is hot; beyond that, as I drink the coffee and the cup becomes lighter, my arm adjusts the amount of force I use to lift it.
Similarly to our other senses, our perception of an object depends not only on what we feel, but what we expect to feel. You are more likely to correctly identify an object, for example, when you expect to be touched on a particular finger (Craig, 1985), and our cortical blood flow changes when we expect to experience touch (Drevets et al., 1995).
Where is the Merkel receptor located, and what does it do?
It is located deeper in the skin and senses vibrations.
It is located close to the surface of the skin and fires when pressure is first applied and when it is removed.
It is located deeper in the skin and sends messages about the stretching of the skin.
It is located close to the surface of the skin and fires continuously while the skin is in contact with an object.
Where is the Pacinian corpuscle located, and what does it do?
It is located deeper in the skin and senses vibrations.
It is located close to the surface of the skin and fires when pressure is first applied and when it is removed.
It is located deeper in the skin and sends messages about the stretching of the skin.
It is located close to the surface of the skin and fires continuously while the skin is in contact with an object.
The ________ is located deeper in the skin and sends messages about the stretching of the skin.
Imagine you are looking at the cortex of an animal with a large portion devoted to its front feet and tail but little devotion to its back feet and nose. What prediction can you make about this animal's behaviour?
This animal probably spends a lot of time using its back feet to interact with the world.
This animal probably does not have a good sense of smell.
This animal probably does not use its tail particularly often.
This animal probably spends a lot of its time using its tail and front feet to investigate the world.
The subjective nature of touch seems to be a mystery. Take temperature, for instance: while some people can walk on hot coals or enjoy a bath so hot it turns their skin red, others will jump into freezing water for fun. A large part of this is because temperature perception is, in many ways, relative. In summer, the temperature in Perth, Western Australia, might range from 24 to 35 °C, and so while 24 °C might feel cool in the evening, the same temperature would feel warm during the autumn months. This is because our perception of temperature depends on what we are comparing the current stimulus to. Many of us have experienced this relativity after coming in from the cold, when washing our hands with room-temperature water can seem scalding hot.
We sense temperature changes through both hot and cold thermoreceptors in the skin. Cold fibres respond by increasing their firing rate to objects that are cool to the touch while warm fibres increase firing to heat. These receptors also fire in response to chemical stimuli, which is why menthol has a cooling effect and you never want to rub your eyes after cutting hot peppers or touching mustard.
Pain is an adaptive response to tissue damage. It is also highly subjective, making it quite difficult to study. While some people report great discomfort at the slightest exposure to a painful stimulus, others report enjoying the feeling of receiving piercings and tattoos. Pain also seems to be highly dependent on expectation and enculturation: in cultures where the pain of childbirth is deemphasised, fewer pain-management interventions are used. When a person is distracted or engaged in a demanding task, the experience of pain is also reduced. For example, Robbins (2000) describes a patient at the University of Washington Burn Center who, while having his bandages changed, wore a virtual-reality helmet displaying an interactive virtual world. Because he was engaged in the games in the helmet, he reported significantly less pain.
Psychologists actually distinguish between three types of pain. Nociceptive pain is pain caused by the activation of nociceptors. Pain serves a purpose: when a limb has been damaged, we reduce our use of the limb because of the pain we experience. Interestingly, the experience of pain is highly context dependent. There are ample stories of marathon runners completing races on broken feet, or injured soldiers who report little to no pain after a serious injury. How do we explain this?
Gate-Control Theory of Pain
Pain is only adaptive if it helps keep the organism alive. The gate-control model suggests that impulses indicating painful stimuli can be blocked in the spinal cord by signals sent from the brain. When you are deeply engaged in a physical task like running a marathon or fleeing a predator, the brain prioritises mobility over responding to the source of the pain. We are not exactly sure how this is accomplished; however, the gate-control model suggests that input travels along three pathways. Small-diameter fibres (S-fibres) fire in response to damaging and painful stimuli. When S-fibres are active, a transmission cell (T-cell) becomes activated. The intensity of the perception of pain depends in part on the excitation of the T-cell. The third part of the model is the large-diameter fibres, or L-fibres. These fibres send signals to the brain about stimulation that is not painful. When L-fibres are activated, they inhibit the activation of the T-cells. This closes the gate, which decreases the perception of pain (Melzack & Wall, 1965).
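The push-and-pull between S-fibre excitation and L-fibre inhibition of the T-cell can be sketched as a toy numerical model. The inputs, weights and threshold below are illustrative assumptions, not physiological values:

```python
def t_cell_activation(s_input: float, l_input: float) -> float:
    """Toy gate-control sketch: S-fibres excite the T-cell,
    while L-fibres inhibit it ("closing the gate")."""
    return max(0.0, s_input - l_input)


def pain_perceived(s_input: float, l_input: float, threshold: float = 0.5) -> bool:
    """Pain is signalled only if T-cell activation clears a threshold."""
    return t_cell_activation(s_input, l_input) > threshold


# A painful stimulus alone: the gate is open and pain is signalled.
print(pain_perceived(s_input=1.0, l_input=0.0))  # True

# The same painful stimulus plus strong non-painful stimulation
# (e.g. rubbing the area): the gate closes and pain is suppressed.
print(pain_perceived(s_input=1.0, l_input=0.8))  # False
```

This is one reason rubbing a banged elbow genuinely helps: the rubbing drives the L-fibres, which inhibit the T-cells carrying the pain signal.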
The Subjective Nature of Pain
Like so many other components of perception, the experience of pain depends not only on sensations from the world, but also on what we expect to experience. Pain is particularly susceptible to the placebo effect (Finniss & Benedetti, 2005). That is, individuals report a significant reduction in pain after taking a pill that has no medicinal properties, suggesting that the alleviation of pain is partly a result of the expectation of pain reduction.
Life without pain?
There are individuals in the population who are unable to experience pain. It is perhaps through these individuals that we can best appreciate the importance, however unpleasant, of pain. The disorder results from a recessive allele on chromosome 2. In a study of three families from northern Pakistan, Cox et al. (2006) reported that all affected individuals had injuries on their mouths from self-inflicted bites, and many had fractured bones that were not noticed until movement was impaired. Interestingly, they all experienced normal sensations of touch, pressure and pleasure. The experience of pain may be unpleasant, but it does provide survival value.
5.6 The Kinesthetic Sense
A deeply guarded secret, one typically discovered only in college classes, is that Sesame Street lied to you. We have more than five basic senses, but many of them work so well that we typically take them for granted. For instance, we have a kinesthetic sense, or a basic understanding of where our body is in space and how to move our bodies to accomplish specific tasks. Although our kinesthetic sense relies quite heavily on our sense of touch, other receptors are involved as well. Receptors in the joints and muscles both send and receive information about where the body is in space. Information from these receptors is sent to the somatosensory cortex. Although there is a mountain of things we do not yet understand about the kinesthetic sense, we presume that the neural organisation works much like what we have seen in the visual system. That is, there are cells that fire when specific body parts are oriented in specific positions (Gardner & Costanzo, 1981).
5.7 The Vestibular Sense
The kinesthetic sense works closely with our sense of balance, or our vestibular sense. The sensory cells of the vestibular system are located in the inner ear, adjacent to the cochlea. Two structures respond not just to movement but also to posture and acceleration. The semicircular canals sense changes in acceleration and rotation of the head. The canals are filled with fluid that moves when the head rotates, bending hair cells that signal the change. The second organ, the vestibular sacs, responds to cues associated with a sense of balance and posture, including the force of gravity.
The vestibular system is also closely integrated with the visual system. David Lee (1974) created a room with movable walls. When adult participants stood on a thin beam and watched the room move toward or away from them, they often felt an involuntary need to step off the beam to prevent falling.
Bennie wants to test a hypothesis about the perception of temperature. He puts his left hand in a bowl of warm water and his right hand in a bowl of cold water. After a few moments he puts both hands into a bowl of room-temperature water. How will the water in the third bowl feel?
The same to both hands, since it is the same stimulus
Warm to his left hand and cold to his right hand because of neural adaptation
Cold to his left hand and warm to his right hand because temperature is a relative judgement
It is not possible to know because temperature is subjective and feels different to everyone
How does gate-control theory suggest pain is blocked?
The spine blocks stressful stimuli when the sympathetic nervous system is active
The brain inhibits L-fibres to block pain signals
S-fibres inhibit the T-cells to block pain signals
S-fibres excite T-cells to block pain signals
The brain activates L-fibres to block pain signals
The ______ sense provides information about the position of the body in space.
5.8 The Chemical Senses
Few human sensations are as powerful as the experience of a memory tied to olfaction and its closely related sense, taste. Smell is the only sense whose signals do not first pass through the thalamus; it is particularly old from an evolutionary standpoint and plays a powerful role in our behaviour. Although we may recoil at a gory image, think of your reaction to smelling sour milk or rotting meat, and compare that to the immediate response you have when smelling a delicious dinner cooking in the oven. We have an adaptive response to evaluate food using smell, and an overwhelming emotional response that accompanies it. Similarly, smell plays an important role in how we choose our mates. Female preference for the smell of particular types of men changes as the chances of becoming pregnant increase (Gangestad & Thornhill, 1998).
Humans, as a species, are rather poor smellers. It is perhaps because of this that we understand so little about how the brain interprets smell. In many ways the study of olfaction pushes the boundaries of philosophy and psychology: how do we study a sense we do not perceive particularly well? For comparison, anyone who has ever been camping at Yellowstone knows not to leave food in their car, regardless of how carefully it has been packed. A bear who is miles away can smell your sandwich and pick out its scent from the millions of other scents, or odorants, between itself and your car; he can navigate to the parking lot and pick your car from all the others. He can smell your food even though the windows are rolled up, it's in a cooler, and you even triple-bagged it. The bear can find your sandwich as easily as you can see a red strawberry on a vine. It is important to note, though, that the bear does not smell better than you and I; he simply smells more. Although his brain is about 66% smaller than a human brain, his olfactory bulb is around five times larger than ours. So just as we live in a world translated primarily through the visual sense, a bear lives in a world of olfaction.
Other animals' superior sense of smell is put to practical use: dogs are used to find contraband and track criminals, and in Africa, giant rats have been used to help clear landmines and detect tuberculosis.
Why would you choose rats to detect landmines?
They are highly social and intelligent and therefore easy to train.
They are extremely sensitive to smell and have more genetic material dedicated to olfaction than any other mammal.
They can be trained to communicate with a human trainer when they smell a target.
Why are rats a good choice to detect cases of tuberculosis?
They are cheaper than trained techs
They are much more accurate than trained techs
They can do it more quickly than trained techs
This is a trick question to see if I watched the video; rats were not able to detect TB.
Perhaps what is most important to point out is that the rats were able to accomplish something with their noses that trained human technicians are not able to do with their eyes.
Perception of smell and taste begins with activation of chemoreceptors. These sensory cells respond to chemical properties of molecules that are interpreted as smell and taste. These two closely related senses are unique; they are the only senses that require you to take the physical stimuli into your body in order to analyse the incoming information.
Airborne molecules are drawn into the upper nasal cavity, where they interact with receptor sites in the nose and mouth. Odourants bind to receptors on the cilia of the olfactory receptor neurons (ORNs) embedded in the olfactory mucosa. The receptor cells send their messages to the olfactory bulb in the brain. The network becomes more complicated from here as it cascades to various regions of your brain.
Although we do not know a lot about the olfactory system, we know that ORNs are sensitive to specific odourants. This increases the difficulty of studying smell: while vision is coded by only four types of photoreceptors, studies have shown that people have over 350 olfactory receptor types, each responding to a specific range of molecules (Buck & Axel, 1991).
Researchers have found that, using these receptors, we can identify over 100,000 different odours. Each receptor seems to be specialised to code specific types of molecules, but smells are often made up of multiple molecules. Think about the smell of breakfast: you can smell the bacon cooking and the fresh coffee brewing. The smell of coffee alone is created by more than 100 different molecules, yet you are able to distinguish those two smells from one another, even though all the different molecules arrive at your nose at the same time.
The ORNs send their signals to glomeruli in the olfactory bulb. These cells consolidate all the messages from a particular receptor type; that is, all 10,000 ORNs of a particular type will send their signals to just one or two glomeruli. There is yet one more mystery we can discuss before leaving the sense of smell. Although we do believe that ORNs are sensitive to specific molecules and that these molecules are associated with particular smells, it turns out that things are not this simple: some molecules with similar structures create different perceptions of smell (Linster et al., 2001), and molecules with different structures can be interpreted as similar. Smell is also highly dependent on expectation. If, for instance, we give people the smell of "onion" labelled as "pizza", they will rate the smell more favourably than if it is labelled "body odour" (Herz, 2003). Despite this, researchers have determined that there are links between the structure of molecules, activation of the ORNs and specific patterns of activation in the olfactory system.
There are over ______ types of olfactory receptors.
Where are the olfactory receptor neurons located?
The olfactory bulb
The olfactory mucosa
The retronasal route
The taste buds
Which of the following statements is false?
The ORNs send their signals to glomeruli in the olfactory bulb.
Glomeruli consolidate all the messages from a particular receptor type.
All 10,000 ORNs of a particular type will send their signals to just one or two glomeruli.
Glomeruli join with taste molecules in the nasal cavity.
All of the above statements are true
As we have already discussed, both smell and taste require you to actually pull the stimuli into your body for analysis. For this reason, some people have referred to these two senses as "gatekeepers." They help us decide what we should ingest and what we should leave alone.
Taste relies on the correlation between the molecular properties of a substance and the effect of that substance on the body. For instance, many nutritious and high calorie foods are sweet. When the brain perceives sweetness on the tongue, not only do we decide to eat more of the substance, but the gastrointestinal system begins to prepare for ingestion of sweet foods. Conversely, consider your response if you taste something rotten or bitter. It is unlikely that you continue to eat.
In many ways, taste is just as elusive as olfaction. Researchers have identified five basic tastes that we seem to use in conjunction with our sense of smell to evaluate food. Most people are aware of the divisions of sweet, salty, sour, and bitter, but more recently, researchers have also identified the taste umami, which is best defined as savoury.
Taste begins on the tongue. If you take a moment to look at your tongue in the mirror, you will see it is covered with little bumps called papillae: the location of our tastebuds. If you look a little closer, you will see that not all the bumps are the same; we have four categories of papillae. The first is the filiform papillae, which are found over the entire surface of the tongue and give your tongue its "fuzzy" appearance. These are the only papillae that do not contain taste buds. On the tips and sides of your tongue, you will see the fungiform papillae, so named because they look like little mushrooms. Along the sides toward the back of your tongue, you will see little folds known as the foliate papillae. Lastly, the circumvallate papillae are found on the back of your tongue and are shaped like little mounds.
Which of the papillae do NOT contain tastebuds?
This is a trick question; all papillae have taste buds.
What is Umami?
A cell in the papillae that senses certain bitter chemicals
A cell in the olfactory mucosa that interacts with taste
One of the five basic tastes, associated with the perception of "savoury"
A structure in the brain that controls aversion to toxic or unpleasant foods.
Each taste bud contains 50-100 taste-sensitive cells, which protrude into a taste pore. Transduction occurs when chemicals bind to the receptor sites on the taste pore. From there, messages are sent not only to the brain but also to the stomach, as your body begins to metabolically prepare for food.
Sensations from both smell and taste are combined in the orbitofrontal cortex (OFC). This region also receives information from the visual "what" pathway. It is for this reason that we think that the OFC contains bimodal neurons, or neurons that respond to more than one sense. These neurons seem to specialise in determining sensations that occur together. We also believe that because this is the first place that taste and smell combine, it is the location of flavour perception.
Where does the perception of flavour most probably occur?
The taste bud
The taste pore
Despite these complexities, there are some species-specific aspects to taste perception. Infants prefer sweet things, avoid bitter flavours, and make a characteristic face when given lemon juice. Although there are several learned and cultural influences on our perceptions and preferences for taste, it is also clear that some preferences have their roots in the survival of our species.
5.9 Psychophysics
Perception, like many areas of psychology, is a rather elusive area to study. We have already discussed the difficulty of trying to understand another person's experience, but as scientists, it is important to try. How do we begin to make sense of the translation of the external world into the internal experience of perception? The field of psychophysics attempts to evaluate the way the physical stimuli of light, sound, and the chemicals in our nose are translated into psychological perceptions.
5.9.1 Stimulus Detection
Stimulus detection is a technique that attempts to answer the question: what is the minimum amount of stimulus required to generate a sensation? Imagine you are sitting in a room that is absolutely dark. On the far wall, I display a dim light. Although you may not be able to perceive the light when it is at its lowest level, if I gradually make the light brighter, you will eventually see it. The point at which you see the light is known as the absolute threshold for the stimulus: the level of intensity required to create a conscious experience. It is also worth pointing out that the absolute threshold is not absolute; it can be quite different between individuals and circumstances. For this reason, the absolute threshold is defined as the point of intensity required for a participant to detect the stimulus 50% of the time.
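Because detection is probabilistic, the 50% point is usually estimated from data rather than observed directly. Here is a minimal sketch, using made-up detection rates and simple linear interpolation between the two measurements that bracket 50%:

```python
def absolute_threshold(intensities, detection_rates, target=0.5):
    """Estimate the intensity detected `target` (here 50%) of the time,
    by linear interpolation between the two bracketing data points."""
    pairs = list(zip(intensities, detection_rates))
    for (i1, p1), (i2, p2) in zip(pairs, pairs[1:]):
        if p1 <= target <= p2:
            # Interpolate between the two bracketing intensities.
            return i1 + (target - p1) * (i2 - i1) / (p2 - p1)
    raise ValueError("target rate not bracketed by the data")


# Hypothetical data: light intensity (arbitrary units) vs. proportion detected.
intensities = [1, 2, 3, 4, 5]
rates = [0.05, 0.20, 0.55, 0.85, 0.95]
print(absolute_threshold(intensities, rates))  # falls between 2 and 3
```

In a real experiment the detection rates would come from many repeated presentations at each intensity; the point of the sketch is only that the "threshold" is a statistical summary, not a hard sensory boundary.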
A second thing we try to take into account is individual bias, through signal detection. Some people will report the presence of a stimulus even when none has been presented. These individuals are much more likely to have a high "hit rate," which means that they are likely to detect more stimuli when they are presented. Because of their bias, however, they are also likely to say that a stimulus was present when it was not; we call these instances "false alarms." Individuals with high hit rates and high false-alarm rates are said to have a liberal response bias. Conversely, some individuals prefer to be certain that a stimulus was presented before they say they heard, saw or felt it. These people tend to have a higher miss rate; that is, they tend to say they did not perceive a stimulus even when one was presented. They also have a higher correct rejection rate, meaning they are more likely to say there was no stimulus when none was presented. This response pattern is referred to as a conservative bias. Using signal detection, we can determine quite a bit about the stimulus under investigation and the sensitivity of each participant.
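These four outcomes (hits, misses, false alarms and correct rejections) let us separate a participant's sensitivity, conventionally written d′, from their response bias. A small sketch using Python's standard library and made-up response counts:

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # convert a proportion to a z-score


def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity: d' = z(hit rate) - z(false-alarm rate).
    (Rates of exactly 0 or 1 would need correcting in real data.)"""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return _z(hit_rate) - _z(fa_rate)


def criterion(hits, misses, false_alarms, correct_rejections):
    """Response bias: c < 0 is liberal, c > 0 is conservative."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return -(_z(hit_rate) + _z(fa_rate)) / 2


# Hypothetical liberal responder: many hits, but many false alarms too.
print(round(d_prime(45, 5, 20, 30), 2), round(criterion(45, 5, 20, 30), 2))
# Hypothetical conservative responder: fewer hits, very few false alarms.
print(round(d_prime(30, 20, 5, 45), 2), round(criterion(30, 20, 5, 45), 2))
```

With these particular counts, both responders come out with the same d′ (about 1.53) despite opposite criteria, which is exactly the point of the analysis: sensitivity and bias are separate things.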
Below is a simple experiment for you to get a sense of Signal Detection Theory. Please take your time and be as accurate as possible.
There is an online detection theory calculator available here.
How did you do? Most likely, you had an easier time in the first round. As you can probably see from your data, your sensitivity, or ability to detect differences, changed between the first try and the second. You will also notice that although some discriminations were easy to make, others were more difficult. When you encountered these, were you more likely to "play it safe" and say "same", or did you take a gamble and guess "different"?
Did you find that making discriminations was easier or more difficult after seeing the green screen?
What was your d'?
5.9.2 Difference Threshold
The last measure we will discuss is the difference threshold. This is the smallest change in a particular stimulus required for a difference in magnitude to be detected. More simply, imagine that you are holding a one-kilogram weight in each hand. Now imagine that I increase the weight in your right hand by 50 grams. Do you think you would notice this small difference between the two? Now imagine I increase the weight to two kilograms; it is much more likely that you will notice that one weight is considerably heavier. The question we are interested in is: how small a weight can we add for you to just notice the difference?
It should not surprise you that determining the just noticeable difference (JND) depends on several factors. Individual differences mean this value is not absolute, as some people are a bit more sensitive to increases in weight than others. More importantly, the amount of stimulation already present also influences your ability to detect differences. Take the example above, and pretend we determined that you can notice the difference after 500 grams. Now imagine you are holding 100 kilograms; the odds are that an additional 500 grams will not be noticed. The difference threshold depends on a ratio known as Weber's Law. This principle states that the just noticeable difference between two stimuli is a constant proportion of the intensity or size of the stimulus. In simpler terms, the more intense the stimulus, the larger the change required to notice a difference.
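Weber's Law can be written as ΔI = k·I, where k is the Weber fraction for that sense. A quick sketch using the weight numbers from the example above (the 500 g at 1 kg implies k = 0.5; this is purely illustrative, as real Weber fractions for lifted weights are far smaller, around a few percent):

```python
def jnd(intensity, k):
    """Weber's Law: the just noticeable difference is a constant
    fraction k of the current stimulus intensity (delta_I = k * I)."""
    return k * intensity


# From the example in the text: 500 g is just noticeable on top of 1,000 g,
# so the (illustrative) Weber fraction is k = 500 / 1000 = 0.5.
k = 500 / 1_000

print(jnd(1_000, k))    # 500.0 g is just noticeable when holding 1 kg
print(jnd(100_000, k))  # 50000.0 g needed at 100 kg, so 500 g goes unnoticed
```

The same ratio logic applies to brightness, loudness and the other senses, each with its own characteristic Weber fraction.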
Beaman, P. C., & Williams, T. I. (2010). Earworms (stuck song syndrome): Towards a natural history of intrusive thoughts. British Journal of Psychology, 101(4), 637-653.
Blood, A. J., & Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences (US), 98(20), 11818-11823.
Buck, L. & Axel, R. (1991) A novel multigene family may encode odorant receptors: a molecular basis for odor recognition. Cell, 65, 175-187.
Cox, J. J., Reimann, F., Nicholas, A. K., Thornton, G., Roberts, E., et al. (2006). An SCN9A channelopathy causes congenital inability to experience pain. Nature, 444, 894-898.
Finniss, D. G., & Benedetti, F. (2005). Mechanisms of the placebo response and their impact on clinical trials and clinical practice. Pain, 114, 3-6.
Gardner, E. P., & Costanzo, R. M. (1981). Properties of kinesthetic neurons in somatosensory cortex of awake monkeys. Brain Research, 214, 301-319.
Hepper, P. G., Scott, D., & Shahidullah, S. (1992). Newborn and fetal response to maternal voice. Journal of Reproductive and Infant Psychology, 11(3), 147-153.
Hubel, D.H., & Wiesel, T.N. (1962) Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. Journal of Physiology. 160, 106-154.
Linster, C., Johnson, B. A., Yue, E., Morse, A., Hungco, E.E., et al. (2001) Perceptual Correlates of Neural Representations Evoked by Odorant Enantiomers. Journal of Neuroscience, 21, 9837-9843.
Melzack R, Wall PD. (1965) Pain mechanisms: a new theory. Science 150, 971-979.
Mennella, J.A., Jagnow, C.P., Beauchamp, G.K. (2001) Prenatal and postnatal flavor learning by human infants. Pediatrics, 107(6) e88; DOI: 10.1542/peds.107.6.e88
Varendi, H., Porter, R. H., & Winberg, J. (1994). Does the newborn baby find the nipple by smell? The Lancet, 344, 989-990.
Wandell, B. A., Dumoulin, S. O., & Brewer, A. A. (2007). Visual field maps in the human cortex. Neuron, 56(2), 366-383.
 Image courtesy of The Bridgeman Art Library, Object 140264 in the Public Domain.
 Image courtesy of Hans Bug in the Public Domain.
 Image courtesy of Piotr Siedlecki in the Public Domain.
 Image courtesy of Image Catalog in the Public Domain.
 Image courtesy of PDArt in the Public Domain.