Ganglion cells in the retina collect information and pass it on via the optic nerve
Optic chiasm - left and right visual fields of both eyes -> split and routed to the corresponding hemispheres
LGN (subcortical) - cells with receptive field properties similar to those of ganglion cells
V1 - first cortical visual area
Recap from Year 1 - Geniculostriate Pathway: Retinal and LGN receptive fields
On-centre/off-surround retinal ganglion cell vs off-centre/on-surround retinal ganglion cell
Receptive field - the part of the visual array within which a stimulus will activate the cell -> excitatory or inhibitory effect on the ganglion cell
Optimal stimulus -> light that matches the size of the retinal ganglion cell's on-area; light that covers both areas will lead to a null response
Recap from Year 1 - Geniculostriate Pathway: V1 receptive fields (Hubel and Wiesel, 1960s)
Simple cells -> receptive fields emerge from adjacent LGN field inputs
Most effective stimulus was an oriented bar -> orientation selectivity in striate cortex; some can be on/off-centre
Complex cells -> do not have spatially fixed inhibitory/excitatory regions
More dynamic, still show orientation selectivity but across the visual field; receptive fields emerge from adjacent simple cell field inputs
Orientation selectivity -> preference: normal distribution of neural response based on orientation
Recap from Year 1 - Geniculostriate Pathway: Columnar arrangement in V1
Systematic arrangement of orientation preference across the cortical surface
Ocular dominance (left eye, right eye, ..) columns arranged perpendicular to orientation columns
One set of ocular dominance and one set of orientation columns form one 'hypercolumn'
Hypercolumn - a cortical processing module for a stimulus that falls within a particular retinal area
Low-Level Visual Processes - Feature Detection
Hierarchical model: increasing complexity from simple to complex cells -> small receptive field in V1 (simple, edges/lines), to large receptive field in V4 and IT (complex, objects/faces)
Hubel and Wiesel (1979) -> doubted there is a single cell that recognises faces
What a single cell 'detects' -> high vs low contrast
Spikes per second plotted against orientation gives a bell-shaped tuning curve: at high contrast the curve crosses 10 spikes/s at two orientations; at low contrast at only one -> many variables impact the way a cell responds -> the output of a single cell is ambiguous
Low-Level Visual Processes - Fourier Analysis
Early cells as parts of independent channels -> each channel conveys info contained in the image at a specific spatial scale and orientation: high spatial frequencies (fine scale) for visual detail; low spatial frequencies (coarse scale) for broad structure
Visual system deconstructs image into discrete channels before recombining
Removing high spatial frequency content -> blurry image; retains coarse luminance, large-scale structure
Removing low spatial frequency content -> lose the broad structure of the image; retains fine luminance, small-scale detail
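The channel idea above can be sketched numerically: a luminance profile split into complementary coarse and fine components by masking FFT coefficients. A minimal sketch assuming NumPy; the function name and cutoff are illustrative, not from the lecture.

```python
import numpy as np

def split_spatial_frequencies(signal, cutoff):
    """Split a 1-D luminance profile into a low-frequency (coarse) and a
    high-frequency (fine) component by masking FFT coefficients."""
    spectrum = np.fft.rfft(signal)
    low = spectrum.copy()
    low[cutoff + 1:] = 0                      # keep only the coarse structure
    high = spectrum - low                     # the remaining fine detail
    return np.fft.irfft(low, len(signal)), np.fft.irfft(high, len(signal))

# A sharp luminance edge: the low channel carries the broad transition,
# the high channel carries the fine detail at the edge itself
x = np.linspace(0, 1, 256, endpoint=False)
edge = (x > 0.5).astype(float)
coarse, fine = split_spatial_frequencies(edge, cutoff=4)
```

By linearity of the FFT the two channels sum back to the original signal, mirroring the idea that the image is deconstructed into discrete channels and then recombined.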
Low-Level Visual Processes - Fourier Analysis
Deconstructing and Reconstructing images: complex signals can be constructed from simpler sinusoidal functions
Vision: can add together sinusoidal functions to create visual images -> decompose images into spatial frequency components at differing orientations, i.e., a 2D Fourier transform; a computer can then reconstruct the image
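The synthesis direction can be sketched in one dimension: summing sinusoids to approximate a sharp-edged (square) waveform. A hedged sketch assuming NumPy; `square_wave_partial_sum` is an illustrative name, not from the lecture.

```python
import numpy as np

def square_wave_partial_sum(t, n_harmonics, f0=1.0):
    """Fourier synthesis: approximate a square wave by summing its odd
    sinusoidal harmonics, (4/pi) * sum over odd k of sin(2*pi*k*f0*t)/k."""
    wave = np.zeros_like(t)
    for k in range(1, 2 * n_harmonics, 2):    # odd harmonics 1, 3, 5, ...
        wave += np.sin(2 * np.pi * k * f0 * t) / k
    return 4 / np.pi * wave

t = np.linspace(0, 1, 1000, endpoint=False)
target = np.sign(np.sin(2 * np.pi * t))       # the ideal square wave
rough = square_wave_partial_sum(t, 1)         # one sinusoid: crude shape
sharp = square_wave_partial_sum(t, 50)        # many harmonics: near-square edges
```

Adding more sinusoidal components makes the approximation progressively sharper; running the process in reverse (decomposition) recovers the components, which is what a 2D Fourier transform does for images.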
Low-Level Visual Processes - Fourier Analysis: Evidence in Humans - Spatial Frequency Channels
Measuring contrast sensitivity function - should be able to plot CSF based on contrast and spatial frequency -> n-shaped curve
Contrast sensitivity varies as a function of spatial frequency
Sensitivity peaks at ~2-5 cpd -> higher image contrast is needed to detect high and low spatial frequencies
Low-level Visual Processes - Fourier Analysis: Spatial Frequency Channels
Blakemore and Campbell (1969) -> contrast threshold, adaptation
Ps adjusted contrast until the grating was just visible -> after adapting to a spatial frequency for 60 s, the contrast threshold rose - neurons habituated, less sensitive -> higher contrast needed
Threshold is increased (sensitivity decreased) for spatial frequencies similar to the adapting frequencies - implies existence of multiple, overlapping spatial frequency channels
DeValois (1982) -> contrast sensitivity of V1 cells in macaques
Low-Level Visual Processes - Summary
Feature Detection - single cells convey spatially local information; maps well onto the functional organisation of V1
Fourier Analysis - distributed coding across many cells in different channels; maps well onto the responses of some V1 cells
Neither theory explains high-level perception, i.e., recognising objects or faces
Low-Level Vision: David Marr's Computational Approach
Computational Level -> goal of the system, purpose, problem it solves
Algorithmic Level -> rules and representations that achieve this goal
Implementational Level -> biological mechanisms that bring about the algorithm
Computational Vision: vision serves multiple goals (where objects are, their shapes, how to interact with them); each of these goals relies on numerous algorithmic steps, i.e., edge detection is involved in identifying object boundaries and structure
Low-Level Vision: Edge Detection
Marr's Model of Object Perception: grey-level representation (info at the level of photoreceptors, like pixels) -> primal sketch (object boundaries) -> 2.5D sketch (element of depth perception) -> 3D model
Marr and Hildreth (1980) - Edge Detection: algorithm transforms image to highlight edges, assuming edges coincide with gradients in luminance
Low-Level Vision: Marr and Hildreth (1980)'s Edge Detection Steps:
Initial measurement -> plotting the luminance gradient can implicitly separate out an edge
f'(x) -> peaks/valleys of the intensity gradient (rate of change); difficult - have to decide which peak/valley magnitudes you are interested in; susceptible to noise
f''(x) -> zero crossings wherever there is a luminance gradient, an edge shown by the change from positive to negative; negatively affected by high-frequency noise, i.e., zero crossings where no meaningful gradient exists
Proposed a smoothing process
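The derivative steps above, and the noise problem that motivates smoothing, can be sketched in one dimension. A minimal sketch assuming NumPy; the sigmoid edge and noise level are illustrative choices.

```python
import numpy as np

def zero_crossings(second_derivative):
    """Indices where the second derivative changes sign (candidate edges)."""
    s = np.sign(second_derivative)
    return np.where(s[:-1] * s[1:] < 0)[0]

# A smooth luminance edge (sigmoid ramp from dark to light near x = 100)
x = np.arange(200.0)
luminance = 1 / (1 + np.exp(-(x - 100.3) / 5))

d1 = np.diff(luminance)                 # f'(x): peaks at the edge
d2 = np.diff(d1)                        # f''(x): crosses zero at the edge
clean_edges = zero_crossings(d2)        # a single crossing near x = 100

# High-frequency noise creates many spurious zero crossings -
# the problem Marr and Hildreth's smoothing step is meant to solve
rng = np.random.default_rng(0)
noisy = luminance + rng.normal(0, 0.01, luminance.shape)
noisy_edges = zero_crossings(np.diff(np.diff(noisy)))
```

On the clean signal the second derivative crosses zero exactly once, at the edge; on the noisy signal spurious crossings appear where no meaningful gradient exists.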
Low-Level Vision: Edge Detection - Smoothing Process
Equivalent to first blurring the image
Equivalent to removing high-frequency content (i.e., analysis at a coarse spatial scale)
Expressed as convolving the image with a Gaussian operator G -> each pixel is blended with its neighbours; the sigma of G determines the blurring - larger sigma = greater blurring
However, this sequential process (luminance step -> smoothed -> first derivative, bell-shaped peak -> second derivative, zero crossing) can be done in one, parallel step
Low-Level Vision - Edge Detection: Laplacian of Gaussian (LoG) filter
Convolving image with LoG filter achieves the same steps in one operation -> provides a zero crossing
2D LoG looks like an on-centre/off-surround retinal/LGN receptive field - linking the algorithm to a biological mechanism
Filters should span a range of sizes to cover the full range of scales and spatial frequencies - trade-off between noise removal (coarser scales, need a larger filter) and edge enhancement (finer scales, smaller filter)
Spatial Coincidence Rule - a zero crossing at a single scale may just be a gradual intensity change -> combine information: assert an edge only where zero crossings coincide across filter sizes
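A one-dimensional analogue of the LoG step can be sketched as follows: the second derivative of a Gaussian does the smoothing and differentiation in a single convolution. A hedged sketch assuming NumPy; the kernel truncation at 4 sigma and the test signal are illustrative choices.

```python
import numpy as np

def log_kernel_1d(sigma):
    """Second derivative of a Gaussian - a 1-D analogue of the LoG filter.
    Its profile has the centre-surround shape of an on/off-centre field."""
    half_width = int(4 * sigma)
    x = np.arange(-half_width, half_width + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return (x ** 2 / sigma ** 4 - 1 / sigma ** 2) * g   # d2/dx2 of the Gaussian

def log_edges(signal, sigma):
    """Smooth and twice-differentiate in one convolution, then mark the
    zero crossings of the filtered output."""
    response = np.convolve(signal, log_kernel_1d(sigma), mode="same")
    s = np.sign(response)
    return np.where(s[:-1] * s[1:] < 0)[0]

# A noisy luminance step: one LoG pass still yields a crossing at the edge
rng = np.random.default_rng(1)
x = np.arange(200.0)
step = (x > 100).astype(float) + rng.normal(0, 0.05, 200)
edges = log_edges(step, sigma=4.0)
```

A larger sigma removes more noise but localises edges more coarsely, which is the filter-size trade-off; the kernel sums to roughly zero, so it responds to gradients rather than uniform luminance.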
Low-Level Vision - Edge Detection
Retinal and LGN receptive fields can be considered as spatial filters that compute the second derivative of an image
Location of zero crossings in the output can be represented by simple cells in V1 -> signalling the presence of a luminance gradient in line with their orientation preference
Low-Level Vision - Rapid Edge Detection
Paradiso and Nakayama (1991) - temporal mask paradigm; Ps view a luminance target followed by an annular mask -> Ps report a composite image -> reveals what they perceive; the presence of the mask disrupts the "filling in" process
Low-Level Vision - Edge Detection: Conclusion
Low-level visual processing can be characterised in the wider context of vision's computational goals
Computational Level - detecting an object's boundary and perceiving its structure
Algorithmic Level - Marr and Hildreth's model: convolving the image with a set of LoG filters and detecting the zero crossings
Implementational Level - retinal and LGN cells perform the filtering, and simple cells in V1 detect zero crossings
But more complex edges exist -> not all edges are defined by luminance
Low-Level Vision - First- and Second-Order Edge Perception
First-order edges -> defined by a luminance gradient
Second-order edges -> no luminance difference, edges invisible to Marr-Hildreth model, defined by 'texture'
Julesz (1981) and 'texture segmentation' -> model based on local conspicuous image features -> textons
Influenced by feature detection and pre-attentive visual search -> individual neurons responding to specific areas within the visual field
Conspicuous features: oriented lines; line terminations; junctions (T and X)
Low-Level Vision - First- and Second-Order Edge Perception
Bergen and Julesz (1983): effortless segmentation - differences in conspicuous element; difficult segmentation - no difference
Nothdurft (1985) -> similarities between luminance segmentation (first-order) and texture segmentation (second-order): luminance segmentation - more difficult as element spacing increases; texture segmentation - more difficult with spacing and shortened length - not consistent with texton model
Suggests a mechanism more similar to an edge detection mechanism sensitive to spatial scale and orientation
Low-Level Vision - First- and Second-Order Edge Perception: Texture Gradient
Nothdurft drew similarities between luminance and texture segmentation -> may be achieved by evaluation of a gradient
For a given textural property (orientation), the determinant of segmentation performance is not simply the difference in that property from foreground to background, but the spatial gradient of the property across the texture boundary
Low-Level Vision - First- and Second-Order Edge Perception: Texture Segmentation
Bergen and Adelson (1988) - can be impaired/enhanced by changing the sizes of the elements -> suggests that simpler filtering processes can account for segmentation
Computational models of texture segmentation: spatial differences in orientation and spatial frequency statistics
Neurophysiology (Lamme, 1999) -> single cell recording in V1 of macaques: response enhancement at texture figure and edge, develops after initial response peak (beyond V1) -> supports edge-based segmentation process -> filling in
High-Level Vision - Visual Agnosia
Agnosia - a lack of knowledge; perception of features (i.e., orientation, colour) can be intact, but the object cannot be identified
Associative - complete perception with inability to link object to memory
Apperceptive - disorder of perception, specific visual impairment; can recognise objects through touch
Integrative - not as simple as dissociable associative and apperceptive
High-Level Vision - Visual Agnosia
Lissauer (1889) - first to identify a visually agnosic patient; distinguished between two stages of recognition: apperceptive and associative -> impairments of visual perception rather than impairments of intellect
Associative agnosia - 'normal percept stripped of its meaning' (Teuber, 1968) -> patient can copy model; visual perception is intact
Apperceptive agnosia - impairment in conscious visual representations -> cannot make a copy of a model
High-Level Vision - Integrative Agnosia
Riddoch and Humphreys (1987) - patient HJA passes apperceptive agnosia tests (copying) but shows higher-order perceptual impairments
Reaction time for overlapping objects impaired
Impaired at discriminating real vs unreal objects
Performs better with fewer details, i.e., better with silhouettes than line drawings
Potentially due to a failure to integrate information across space
Indicates something more complex than a simple apperceptive/associative dissociation
High-Level Vision - Integrative Agnosia
Describes a high-level perceptual impairment in integrating the form and features of an object
Birmingham Object Recognition Battery - series of tests to identify level of processing at which recognition is impaired
Low-level features - size, length, orientation
Figure/ground formation - integrating info about overlapping features
Viewpoint invariant representation - identify object from other angle
Stored knowledge - real/unreal decisions; describe/draw from the written word
Knowledge of function and between-object representation - semantically similar objects
High-Level Vision - Associative Agnosia
Associative agnosia may be explained by subtle, low-level sensory impairments
Ettlinger (1956) -> cerebral lesions either with or without visual field impairments and/or agnosia: those with just a visual field defect showed impaired performance; those with both performed at a similar level
Impairments in visual sensory abilities were associated with visual field defects - but the tests used didn't fully account for functional organisation
DeHaan (1995) repeated Ettlinger (1956) - focus on agnosia and taking visual abilities into consideration, using more appropriate tests: shape, location, colour, texture, and lightness discrimination, shape from motion, and line orientation
Group A - Agnosia -> some impairment, no marked difference to controls, some patients impaired in only one domain
Group B - No Agnosia -> most significant impairments
No evidence that these visual functions are necessary/sufficient to cause agnosia -> agnosia is not dependent on low-level visuosensory impairments
High-Level Vision - Form Agnosia
Is apperceptive agnosia dependent on low-level visuosensory impairments? One example of apperceptive agnosia is visual form agnosia
Mr S (Benson and Greenberg, 1969) - failed basic apperceptive agnosia tests (copying, matching objects) -> impairment seemed specific to visual form perception -> could make discriminations based on overall luminance and colour
Term agnosia is questionable - not a lack of knowledge
High-Level Vision - Form Agnosia
Specific and selective impairment
Marr's Model (1982) -> impairment in the primal sketch (edges, contours), which leads to a lack of object perception
Low-level? Performance generally intact, though small impairments are common
Campion and Latto (1985) - contrast sensitivity in an agnosic patient (RC) - abnormal thresholds indicating a low-level sensory deficit (spatial frequency and orientation)
Peppery field defects - losses of very small parts of the visual field
High-Level Vision - Form Agnosia
Peppery Field Defects: fine-grained perimetry - rating the brightness of single dots -> reveals islands of visual loss
Agnosia explained by peppery field defects (scotomata) -> loss of conscious experience in small parts of the visual field -> not a form-specific impairment
Simulated visual field losses in neurotypicals using a mask -> contrast thresholds similar to those of agnosic patients
High-Level Vision - Functional Organisation: Ventral and Dorsal Stream
Milner and Goodale (1991) - a different dissociation: DF's profound visual form agnosia accompanied by other deficits (brightness, motion, depth), but largely intact low-level vision - damage in occipito-temporal cortex (ventral stream)
High-Level Vision - Functional Organisation: Ventral and Dorsal Stream
Milner and Goodale (1991) - DF did not have conscious access to visual object information but could use it to support different tasks: matching - could not match a card to the orientation of a postbox slot; posting - little difficulty in the action -> dissociation between conscious visual matching and completing the action
Dorsal - visuomotor interaction, egocentric, no access to memory, unconscious
Ventral - object recognition, access to memory, conscious, allocentric
Visual agnosia not explained by lower level deficit
High-Level Vision - Functional Organisation: Dissociation in Neurotypicals
Malach (1995) - fMRI in response to images of objects or textures: response in lateral occipital cortex to objects; response does not distinguish familiarity; dissociation between object- and non-object-recognition
Culham (2003) - fMRI in visually guided grasping: response in anterior intraparietal sulcus, not associated with object recognition - intact in DF even though they do not have conscious access
Evidence of neural dissociation in neurotypicals
High-Level Vision - Functional Organisation: Dissociation in Neurotypicals
Aglioti, DeSouza, and Goodale (1995) - used visual illusions to make central circles appear larger or smaller, measured maximum grip aperture during a visually guided grasp
Maximum grip aperture scales with physical size, not perceived size -> the illusion changes perceptual experience (ventral) but does not change visually guided action (dorsal)
Some interaction between ventral and dorsal streams guides visually guided actions -> DF had issues in posting complex shapes, i.e., a T-shape
High-Level Vision - Modular Processing
Faces are processed in a unique, holistic way -> only for upright faces and not for any non-face objects (Thompson, 1980)
Damage to the ventral system can result in prosopagnosia - can be unimpaired in recognising other objects
Kanwisher (1997) - modular: ventral stream is organised to recognise faces - activity in Fusiform Face Area, not explained by differences in low-level features and attention effects
High-Level Vision - Modular Processing
Recognising exemplars/expertise effect (Gauthier, 2000) -> experts in other object recognition (i.e., birds, cars) show activation in FFA -> discrimination of within-category differences
Expertise effects not restricted to FFA and can be confounded with attention (Harel et al., 2010)
Can subdivide sections of the ventral system to recognise different objects i.e., faces, places, limbs -> maps on well to specific impairments/case studies
Low-Level Hearing - Sound Waves
Auditory Perception begins with sound waves
Sound waves are variations in air pressure
Sound waves are longitudinal waves
Waveform graph - increases and decreases in pressure
The physical properties of the sound wave determine perceptual qualities
Amplitude - loudness; quieter/softer when pressure variation is smaller
Frequency - pitch
The relationship between physical and perceptual qualities is not entirely straightforward
Low-Level Hearing - Fourier Analysis
Everyday waveforms are not sinusoidal -> any complex sound waveform can be created using a finite number of sinusoids -> infer that the early auditory system breaks down the sound waves into several sinusoids
Waveforms often plotted as frequency spectra, showing the level of each frequency present in a sound
Lowest frequency present is called the fundamental frequency, f0, e.g., 200 Hz
Harmonics are integer multiples of f0, i.e., 400 Hz, 600 Hz
Spectrograms show frequency spectrum over time - shifting frequency up -> perception of increased pitch
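The fundamental-plus-harmonics structure above can be sketched by building a complex tone and reading its components back off the frequency spectrum. A minimal sketch assuming NumPy; the duration and sample rate are illustrative choices.

```python
import numpy as np

def complex_tone(f0, n_harmonics, duration=0.01, sample_rate=44100):
    """Sum the fundamental f0 and its integer-multiple harmonics into one
    complex tone, e.g. f0 = 200 Hz -> components at 200, 400, 600 Hz."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    harmonics = [(k + 1) * f0 for k in range(n_harmonics)]
    wave = sum(np.sin(2 * np.pi * f * t) for f in harmonics)
    return wave, harmonics

wave, harmonics = complex_tone(f0=200, n_harmonics=3)

# The frequency spectrum recovers the components present in the sound
spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), d=1 / 44100)
peaks = freqs[spectrum > 0.5 * spectrum.max()]
```

Shifting every component up in frequency would move the peaks rightward in the spectrum, corresponding to the perception of increased pitch.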
Low-Level Hearing - Auditory Pathway
Delivering the sound stimulus to the receptor
Converting the physical stimulus into an electrical signal
Inferring perceptual qualities (e.g., loudness, pitch) from electrical signals