DD303 Revision Flash Cards

Description

Flashcards on DD303 Revision, created by bluerose2 on 25/09/2013.
Resource summary

Chapter 3: Perception dorsal and ventral pathways Information from the primary visual cortex follows one of two pathways. The ventral pathway - the "what" system - leads to regions of the brain involved in pattern discrimination. The dorsal pathway - the "where" system - leads to regions of the brain specialised for analysis of information about the position and movement of objects. Norman's dual-process idea distinguishes the two and argues that visual processing can be both for action and for recognition: Ventral - concerned with recognition; processes fine detail; knowledge-based; receives data more slowly; available to consciousness; object-centred. Links to Marr/constructivist approach. Dorsal - drives visually-guided behaviour; processes motion; very short-term storage; receives data faster; less available to consciousness; drives action in relation to an object. Links to Gibson. Norman stresses that the two streams are interconnected, not independent. Fitts (6.4) illustrates this with skill acquisition. There is an issue of whether the streams interact to such an extent that it makes no sense to think of them as functionally separate.
Chapter 3: Perception combining bottom-up and top-down processing Theories of perception vary in terms of bottom-up vs. top-down processing, with Marr and Gibson taking a bottom-up approach and Gregory's constructivist theory a top-down approach. Perception is likely to contain elements of both types of processing. Hupe et al. combine the two on the basis that if one region of the brain sends a signal to another, the second region sends a signal back via a re-entrant pathway: * bottom-up processing produces a low-level description * this generates a higher-level visual hypothesis * the accuracy of the hypothesis is assessed using re-entrant pathways, comparing the hypothesis with the low-level description. DiLollo et al. put forward a re-entrant processing explanation of the backward masking which occurs with four-dot patterns, not easily explained using standard explanations of masking effects. If a target is presented only briefly, then by the time a perceptual hypothesis is compared to the low-level bottom-up description, the target has been replaced by the mask. The perceptual hypothesis is therefore rejected, as it is based on a target which is different from the mask currently being subjected to bottom-up processing.
Chapter 3: Perception the Gestalt approach Gestalt theory states that "the whole is greater than the sum of its parts", and so focuses on the organisation of the elements of an image, rather than the elements themselves. Perceptual organisation includes: * closure: we tend to close small gaps in an image * good continuation: we interpret images in terms of smooth continuities rather than abrupt changes * proximity: elements close to each other tend to be grouped together * similarity: elements which are similar tend to be grouped together Similarity seems to have a stronger effect than proximity. This approach put forward the Law of Pragnanz: Koffka stated that "of several geometrically possible organisations, that one will actually occur which possesses the best, simplest and most stable shape". While there are counterparts to Gestalt demonstrations in the real world, this approach has been criticised for the artificiality and simplification of the stimuli used.
Chapter 3: Perception Gibson's theory Gibson claims perception is direct, and cognitive processing unnecessary. It is an ecological approach, emphasising the wealth of information in the environment. He saw laboratory studies of perception, based on pictorial stimuli, as artificial. The ambient optic array refers to the structure imposed on light reflected by the textured surfaces of the world around us. Additional information comes from invariants, e.g. the horizon ratio relation influences judgement of height, and the texture gradient provides information about distance, orientation and curvature. Flow patterns are brought about by movement of the observer. Motion parallax gives information about shape and position. Affordance refers to characteristics of an object which tell us about its use. However, his claim that we can interact directly with the environment through affordance has been challenged. He is also criticised for allowing no role for memory in perception, and for not providing a detailed account of how information is picked up from the environment.
Chapter 3: Perception constructivist approaches Gregory takes a top-down approach, and argues that perception is influenced by stored knowledge. We generate perceptual hypotheses, and accept the one which best fits the data. There is support for this idea from visual illusions, e.g. the Muller-Lyer, and impoverished figures (e.g. the ocean liner). The perception we reach on this basis is not always correct. In the mask of Hor example, although we know it is hollow, we continue to see a normal face, suggesting that in some cases there is a strong bias towards a false perception. Gregory explains this as a tendency to go with the most likely hypothesis. The theory provides a useful explanatory framework, but some areas are vague: * how do we generate hypotheses? * how do we decide which is correct? * why does knowledge sometimes not help perception? * why do we continue to accept a perception we know to be false?
Chapter 3: Perception Marr's theory Marr's bottom-up theory was developed using computer-based models and algorithms, and focuses on the processes of object recognition. Perception is seen as a series of stages: * grey level: light intensity at every point in the retinal image is measured. * primal sketch: possible edges and textures are identified and used to develop a description of the outline of objects. * 2 1/2 D sketch: how surfaces relate to each other and the observer is described. * 3D object-centred description: object descriptions are developed, allowing recognition from any angle. There is some research support for Marr's theory, but also conflicting findings. Marr & Hildreth successfully used an algorithm to locate the edges of objects, though a successful computer simulation does not necessarily imply that human processing is the same. The findings of Young et al. support Marr's ideas about the integration of depth cues in the 2 1/2 D sketch. Marr's theory cannot adequately be reconciled with the separation of visual pathways into action (dorsal stream) and object recognition (ventral stream).
Chapter 4: Recognition types of recognition According to Humphreys & Bruce, recognition includes the stages: * perceptual classification - matches a structural description of input to a stored representation, resulting in a familiarity judgement * semantic classification - retrieving information about the object and * identification - naming. Naming usually reflects a between-category classification for objects and within-category discrimination for people. Recognition (identification) of familiar and unfamiliar faces suggests different processes. Recognition of faces and emotions are also separable. 'Passive' processing (e.g. perceptual classification) is contrasted with more active, exploratory (haptic) processing (Lederman). Models of 2D (picture) recognition include template and feature recognition (e.g. Pandemonium). They can't explain object recognition. Theories of 3D object recognition (Marr, Biederman) include constructing a 3D object-centred (structural) description. There is some evidence for category-specific impairments for objects (Humphreys) and also for the claim that face recognition might be special (possibly innate, the importance of configuration, and neuropsychological evidence).
Chapter 4: Recognition Marr's theory of object recognition This theory specifies the problems the brain must solve to identify features of objects and identifies different levels of analysis - computational, algorithmic and hardware. It takes a bottom-up stage approach: * grey-level * raw then full primal sketch: representation in terms of contours, edges and blobs (primitives) * 2 1/2 D sketch: a viewer-centred representation of the objects in a scene and of their positions (Chapter 3, 4.3 and 4.4) and * a 3D object-centred representation - this is the basis for object recognition (identification). The 3D stage is based on a canonical framework (main axis) and components (also with their major axes, Fig 4.17). Occluding contours (the contour generator) and concavities can generate the shape of an object. Hierarchical descriptions, built from primitives, have information about global and local features. Object recognition is by a matching of the 3D description to a catalogue of 3D models (representations in LTM) (see Chapter 17, 2.1 for a discussion of computational modelling approaches).
Chapter 4: Recognition evaluation of the theories of Marr and Biederman Marr's theory has been successfully tested by extensive computer modelling (AI) but without much consideration of hardware (neuronal) constraints. Experimental (Lawson) and cognitive neuropsychological evidence (Warrington; Humphreys) shows the importance of axis information for recognition; object recognition can be difficult if the axis is foreshortened in atypical views. Biederman and Gerhardstein found that object recognition could be primed across different views. This supports the claim for object-based representations, central to both theories. Moreover, priming is not found if more than one geon changes across views, thus supporting Biederman. Complex objects in novel viewpoints can be difficult (Bulthoff & Edelman), showing that object-centred descriptions may not be sufficient for recognition, as is claimed by both theories. Within-category discriminations (face, person and animal identification), and the role of learning and expertise, are difficult for both theories to explain.
Chapter 4: Recognition Biederman's theory In Biederman's recognition-by-components theory, a 3D representation is generated from geometrical units (geons): Biederman claimed that 36 were enough to describe a range of objects. Complex objects can be built from component geons divided by areas of concavity (as with Marr). Geons include blocks, wedges, cones and cylinders. They have key features which are invariant across 2D viewpoints. These 'non-accidental properties' (curvilinearity, parallelism, cotermination, symmetry, collinearity) enable 3D geons to be inferred from 2D representations, though the set of rules can also give inaccurate results (the wheel example). The stage of getting an object-centred description from a view-dependent one is central to the accounts of both Marr and Biederman.
Chapter 4: Recognition neurological evidence Prosopagnosia is impaired visual recognition (naming) of familiar faces but with preserved naming of objects, a double dissociation (DD). Young found selective impairments for familiar faces, unfamiliar faces (tested by matching) and for expression. Evidence also shows a DD pattern for recognition of faces and expression. A prosopagnosic patient showed covert recognition (skin conductance response) but without overt (conscious) recognition (Bauer). The Capgras delusion is the 'mirror image' of impairments in prosopagnosia. Patients can recognise faces but deny their authenticity. Ellis & Young and Bauer propose different routes for the processing of identity (ventral visual-limbic pathway) and affect (dorsal visual-limbic, Fig 4.24), i.e. another DD pattern. ERP and fMRI studies show neural correlates of face processing, and studies show localisation of function, e.g. the fusiform face area in the posterior temporal lobes, especially the right. (see p148, Plate 6, Chapter 13 and offprint).
Chapter 4: Recognition Young's diary study Over eight weeks, participants kept a diary record of mistakes in recognising people. A classification of error patterns was consistent with a simple three-stage model for face/person recognition similar to that for objects (Humphreys and Bruce, Fig 4.2). False judgements of familiarity, failure to recognise a familiar person, and failure to access any further information are all consistent with familiarity as a first stage (perceptual classification in Fig. 4.2). This leads to access of contextual and semantic details about the person in the next stage, and then naming (identification) as the final stage. Bruce and Young develop the stage model in terms of face recognition units (FRUs), personal identity nodes (PINs) and additional assumptions like the cognitive system for evaluation (Fig 4.22). This (functional) model, supported by converging evidence from laboratory and everyday findings, is developed further in the IAC connectionist model. The IAC models can further account for a range of cognitive and neuropsychological data (sections 5 & 6).
Chapter 8: Language processing segmenting the speech stream The ability to divide the speech stream develops rapidly as a result of exposure to language. Saffran et al. found that 8-month-old infants could begin to pick out words in an artificial language within 2 minutes of exposure. There are pre-lexical and lexical models of segmentation. Pre-lexical models rely on characteristics of the speech stream. Silence is not enough on its own, but can be useful. Rhythm is another cue: in English, a stressed syllable is typically followed by one or more unstressed syllables, so a word boundary can be expected just before a stressed syllable. This is supported by the word-spotting task of Cutler & Norris; this cue allowed words within non-words to be spotted more easily. French and Japanese have different rhythmic units, and the use of these rhythms as a segmentation strategy has also been shown. However, rhythm will not identify all word boundaries, e.g. 'confess' begins with a non-stressed syllable. Lexical models such as TRACE rely on knowledge of what particular words sound like, i.e. phonological representations. Identifying each word in sequence allows prediction of the start of the next word, though as most words are short, this cannot be done fast enough without backtracking. In the Saffran et al. study, the language used contained no rhythmic cues; it appears that the infants were using statistical information about the co-occurrence of syllables.
Chapter 8: Language processing TRACE TRACE is a connectionist model of spoken-word recognition, based on the IAC model of visual word recognition, using the activation and competition metaphor. It proposes continuously varying activation levels. The speech stream activates phonetic feature nodes, causing activation of words which match the phonemic input so far. A word node has connections from all the phonemes in that word. If the phoneme nodes for that word are activated, activation spreads to the word node. The degree of word node activation reflects the extent to which each word matches the incoming speech. There is competition between word activations; activation of a word node inhibits (i.e. decreases the activation of) other word nodes. There is also competition between words which overlap to some extent, but through lexical competition, only one will remain active. This model is supported by McQueen et al. Participants took longer to identify the word 'mess' within 'duhmess' (where the first 2 syllables match the word 'domestic', so there is lexical competition) than in 'nuhmes' (where there are no longer words which are competitors). The inhibitory link from the 'domestic' node makes the task more difficult.
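The activation-and-competition cycle can be sketched as a toy simulation. This is an illustration only, not the actual TRACE implementation: the two-word lexicon, the letter-based phoneme coding, and all the rates and weights are invented for the demonstration; the point is simply that bottom-up excitation plus lateral inhibition leaves the best-matching word node dominant.

```python
# Toy sketch of TRACE-style lexical competition (illustrative values only).
words = {"mess": ["m", "e", "s"],
         "domestic": ["d", "o", "m", "e", "s", "t", "i", "k"]}
activation = {w: 0.0 for w in words}

def step(heard, excite=0.1, inhibit=0.05, decay=0.02):
    """One update cycle: bottom-up excitation from matching phonemes,
    then lateral inhibition between word nodes, plus passive decay."""
    for w, phonemes in words.items():
        # excitation proportional to overlap between input and the word's phonemes
        overlap = sum(1 for p in heard if p in phonemes) / len(phonemes)
        activation[w] += excite * overlap
    for w in words:
        rivals = sum(activation[v] for v in words if v != w)
        activation[w] = max(0.0, activation[w] - inhibit * rivals - decay)

# hearing /m e s/: 'mess' matches fully, 'domestic' only partially,
# so 'mess' wins the competition over repeated cycles
for _ in range(10):
    step(["m", "e", "s"])
```

After a few cycles, the 'mess' node dominates while inhibition keeps 'domestic' suppressed, mirroring the idea that only one candidate ultimately remains active.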
Chapter 8: Language processing cohort model The cohort model is an example of parallel activation, in which speech is continuously evaluated and re-evaluated for the identity of each word. When the beginning of a word is encountered, the word-initial cohort is activated, i.e. the set of words which match the speech so far. As more of the word is heard, the possibilities are reduced until the uniqueness point is reached, where only one possibility remains, and recognition is complete. Marslen-Wilson, using cross-modal priming, showed that access to meaning can occur before full recognition. A prime like 'confess' triggers more rapid identification as a word of a semantically-related target like 'sin' than when prime and target are not related. They found this was also true of partial primes like 'confe....'. This partial prime also facilitated identification of 'wedding', suggesting that the partial prime accesses both 'confess' and 'confetti', i.e. parallel activation. However, if many meanings are activated, the priming effect is relatively weak, suggesting that activation of more than one meaning can only occur partially. The cohort model is one of dichotomous activation, i.e. words are either activated or not. Later lexical models such as TRACE incorporate varying levels of activation, with competition between word activations.
Chapter 8: Language processing semantic representations Once a word has been recognised, information about it must be accessed. Two theories offer explanations as to how this takes place: * spreading activation models: words are represented as nodes; links between nodes represent semantic relationships. The Collins & Loftus model used different kinds of links for different relationships; other models of this kind simply connect words similar in meaning. * featural theory: word meanings are represented as a set of semantic features. There is a large set of features in the mental lexicon, and each word representation consists of a subset of these features. Priming is widely used to investigate the types of semantic information stored in the mental lexicon. Often associated words are used, i.e. people's immediate associates, on the basis that associative links between words are stored in the mental lexicon. However, as there are many different types of association, research often looks at weak associations, but still with some semantic link. These non-associated links are weaker, but there is still evidence for a priming effect with same-category members, and instrument-action pairs. Kellenbach et al. also showed a robust priming effect for visual properties, as tested using the ERP technique. However, associative, pure semantic, and perceptual knowledge may be accessed in different ways.
Chapter 8: Language processing semantic ambiguity Activating a word's meaning in the mental lexicon is more difficult if the word is ambiguous, e.g. a homonym (multiple unrelated meanings), polysemous (multiple related meanings) or syntactically ambiguous, e.g. noun/verb. Usually the sentence provides a context which aids selection. There are two theories about how this works: * autonomous: all meanings are accessed, then the compatible meaning selected * interactive: the context of the sentence may rule out some possibilities before all meanings are fully accessed. Swinney asked participants to select one visual target related to a prime (a homonym) heard in spoken text. Even when the text was strongly biased towards one meaning of the homonym, both related targets were primed, supporting the autonomous view, unless presentation was delayed by a second, when just the meaning relevant to the context was activated. However, if a homonym has one particularly common meaning, and there is a strong contextual bias towards that meaning, only one meaning may be activated. Priming may also be stronger for the contextually appropriate meaning, giving some support to the interactive model.
Chapter 8: Language processing models of parsing As sentences are almost always new, sentence processing is not purely recognition but a constructive process. A mental model of the information being communicated is built up by parsing, i.e. deducing the grammatical or syntactic role of each word in a sentence. One issue is whether structure is assessed only at major syntactic boundaries or incrementally as each new word is recognised. There is more research support for it as an incremental process. * the garden path model suggests that the resolution of ambiguity is autonomous, i.e. based purely on syntactic information, and assumes serial parsing. * constraint-based models suggest parsing is parallel (more than one potential parse can be evaluated at the same time) and interactive (frequency and semantic plausibility can also have an immediate effect on parsing). There is some support for autonomy, though parsing can be affected by semantic plausibility when the semantic context is highly constrained. It can also be influenced by the rhythm of a sentence, pitch and timing, as well as information about how often words are used in different syntactic structures. Tanenhaus et al. showed the effect of the visual context. Syntactic constraints seem to operate in combination with other constraints.
Chapter 9: Concepts variation in categorisation and concepts Categorisation: categorisation may differ with circumstances; different theories may apply more easily in some circumstances: * classical theories: where definitions are important * prototype theories: where speed is important, or where there is uncertainty * theory-based views: when there is a need for explanation * psychological essentialism: scientific areas Smith & Sloman (3.1) support this: people's judgements were similarity-based or rule-based, depending on task presentation. The use of category words is not the same as categorisation, but is influenced by language. Concepts: concepts also vary, and can be related to different theories: * classical theories: well-defined concepts * prototype theories: fuzzy concepts * theory-based theories, and essentialist approaches: where people have common-sense theories, and many scientific theories. Essentialism may also be particularly relevant to social categorisation. There is still the issue of how these different theories could be brought together in an overall theory of concepts. Categorisation can also vary among categorisers, depending on expertise (Medin et al., 3.3).
Chapter 9: Concepts categories and concepts A concept is a general idea formed in the mind (i.e. internal) relating to every example of a class of things. A category is external, referring to things in the world. Categorisation involves grouping together objects, events and people, and responding to them in terms of their class membership. Categorisation behaviour provides evidence of the nature of concepts. Categories are not necessarily tied to language (e.g. categorisation demonstrated in Arabian horses). Neither are they clear-cut, e.g. the platypus example. They can have practical applications, e.g. the diagnosis of mental illness. Categorisation can be tested using sorting tasks, with the groupings made being taken to represent concepts, e.g. Ross & Murphy. Attribute listing, comparing large numbers of people, tells us about the information used within a concept. How frequently an attribute is mentioned indicates how central it is to the concept. Concepts are the basic units of semantic memory. They are involved in language (lexical concepts) and reasoning. They simplify the task of remembering information as they allow us to make inferences.
Chapter 9: Concepts common-sense theories: the theory-based view Some theorists have suggested that categorisation involves larger knowledge structures, rather than the similarity central to classical and prototype theories of categorisation. Hampton points to other factors: * lack of familiarity * instances where there is a clash between similarity and what makes an item technically a category member. Similarity is also problematic in that 'sharing of properties' needs to define which properties are relevant. Murphy & Medin suggest that concepts are explanation-based, rather than similarity-based, related to background knowledge. This is supported by Rips (the pizza and US quarter study). This theory has highlighted the difficulties of similarity-based theories and has research support. However, it is not clear what is meant here by 'theory', i.e. how this kind of theory might be formalised. The term 'knowledge' may be more appropriate. There are also difficulties with the idea of explaining complex concepts through combining such theories, and of explaining how such combination might take place.
Chapter 9: Concepts the classical view of concepts Members of categories have certain properties in common; this is both necessary and sufficient for category membership. Early studies, e.g. Bruner et al., have supported this view. However: * Typicality: some category members are more typical than others. Rips et al. found that sentence verification was quicker for typical category members than atypical ones (robin vs. penguin). Rosch & Mervis found less typical examples shared properties with fewer category members, suggesting that categories have a rich internal structure. * Borderline cases: there may be no clear-cut distinction between category members, e.g. red and orange. McCloskey & Glucksberg found inconsistency in category judgements. * Intransitivity of categorisation: Hampton found people's category judgements are not necessarily consistent with transitivity (e.g. Big Ben / furniture). * Lack of definitions: Wittgenstein proposed that most categories, e.g. 'game', are not definable: there are similarities and relationships rather than properties in common for all examples. Putnam's thought experiment suggests even natural categories do not provide definitions.
Chapter 9: Concepts prototype theories of concepts Typicality effects suggest that concepts are organised around prototypes, i.e. a "best" category member: similarity to the prototype determines whether items are seen as category members. Unlike the classical view, this allows examples to be category members even if there is a mismatch on some properties. It could explain category membership for categories that resist definition. Categorisation depends on the diagnosticity and weighting given to attributes. These theories can explain typicality effects: * differences in typicality are explained by differences in weighting * faster verifications for typical instances occur because the criteria for similarity are met after matching fewer properties than for atypical instances. * high-typicality examples tend to match on highly weighted values, so will also have properties which are widely shared. However, there is disagreement over the interpretation of typicality effects (Armstrong et al.). They are also affected by linguistic context (Roth & Shoben: milking cows), which cannot easily be explained by prototype theories.
Chapter 9: Concepts psychological essentialism Psychological essentialism proposes that people form categories on the basis of what they believe to be essential properties. It is an attempt to express the theory-based view more precisely. While this idea is consistent with a lot of research evidence, there are also challenges to it (2.4): * Malt: categorisation of liquids as water did not depend entirely on belief about the presence / absence of water * Braisby et al.: people made contradictory judgements in the cat / robot study * there are mixed findings about the role of expert opinion: it may be more relevant to natural categories, i.e. here people are psychologically essentialist, than to artefact categories (Malt). However, Braisby found that even in this kind of area, people's judgements are not always open to the influence of experts, but may focus on non-essential properties. * what counts as essential is open to question (Gelman & Wellman's dog insides / outsides example). * it does not provide a satisfactory explanation of complex concepts.
Chapter 14: Cognition and emotion Schachter & Singer: cognitive appraisal Schachter & Singer propose that emotions arise as a result of our interpretation or cognitive appraisal of our physiological responses to events. Thus the experience of emotion should change if our cognitive appraisal changes. In their classic study, participants were injected with adrenalin. Those expecting it to have no effect reported an emotional experience, while those who were told it would make their heart race did not. This contradicts the James-Lange theory, which does not involve cognition; on that account, arousal from the injection alone should create an emotional experience. Participants were then put in a room with a stooge who behaved either in a very happy or a very angry way. The mood reported by participants reflected the mood of the stooge. There was thus a strong effect of context on the specific emotion experienced, again inconsistent with James-Lange. The results lend some support to the Schachter & Singer theory, but have not always been replicated. Moreover, the physiological responses associated with different emotions are not identical. Later appraisal theories have varied in terms of the criteria they suggest we use in making an appraisal, and some theorists, e.g. Zajonc, challenge the idea that appraisal is necessary for the experience of emotion.
Chapter 14: Cognition and emotion mood and attention In the emotional Stroop task, participants are asked to identify the colours of emotional and neutral words, rather than colour words. Anxious individuals are slower to report the colour of anxiety-related words, an example of an anxiety-related attentional bias. MacLeod et al. have also demonstrated this bias in their dot probe or attentional probe task. Anxious patients react more quickly than controls to a dot if it appears where a threatening word such as 'cancer', as opposed to a neutral word, was previously shown, i.e. they allocate attention to threat words. An anxiety-related attentional bias has been demonstrated with a range of materials, e.g. pictures and faces. It is strongest when the material matches the current concerns of the individual. As with memory, this bias could lead to a vicious cycle: paying more attention to threats could lead to the environment being seen as threatening, which could lead to heightened anxiety, which could increase the attentional bias. It has been shown that attentional bias causes anxiety levels to increase, so training to reduce the bias could help to reduce anxiety.
Chapter 14: Cognition and emotion mood and memory In tests of mood congruent memory (MCM), Bower et al. showed that people remember more of a story which matches their mood (created by hypnosis) while reading it. We also have a bias towards remembering positive information, irrespective of mood. People with clinical depression show MCM effects in a bias for negative material, including their autobiographical memories. MCM can lead to a vicious cycle where depressed mood enhances the accessibility of negative memories, which in turn exacerbate depressed mood. Changing negative bias through mindfulness-based cognitive therapy could help break the cycle. Mood dependent memory (MDM) refers to memory being better if there is a match between mood at the time of the experience and at recall. This has been demonstrated by Bower, again using hypnosis. He went on to develop semantic network theory to explain these memory effects. Emotions are represented as nodes in a network, which link to related semantic items, e.g. words, other emotion nodes and outputs such as behaviour and autonomic responses. Nodes are activated through external or internal stimuli, with activation spreading through the network. However, the MDM effect is not robust, and there are problems with the methods used to investigate it.
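Bower's spreading-activation idea can be illustrated with a toy network. The node names, link structure and numbers here are invented for the sketch; the point is only that activating an emotion node raises the activation (accessibility) of linked material, which is how mood-congruent memories become easier to retrieve.

```python
# Minimal sketch of spreading activation in a Bower-style semantic network
# (node names, links and parameters are invented for illustration).
network = {
    "sadness": ["loss", "failure", "crying"],
    "happiness": ["party", "success"],
    "loss": [], "failure": [], "crying": [], "party": [], "success": [],
}
activation = {node: 0.0 for node in network}

def activate(node, amount=1.0, spread=0.5, depth=2):
    """Add activation to a node and spread a fraction of it to linked nodes."""
    activation[node] += amount
    if depth > 0:
        for neighbour in network[node]:
            activate(neighbour, amount * spread, spread, depth - 1)

# a sad mood activates the sadness node; linked negative items gain
# activation, while nodes linked only to happiness stay at zero
activate("sadness")
```

In this sketch, 'loss' ends up partially activated while 'party' does not, mirroring the claim that a depressed mood makes negative memories more accessible than positive ones.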
Chapter 14: Cognition and emotion the function of emotions (1) a. Emotions alter goals: The evolutionary account of Oatley & Johnson-Laird uses the Big Five emotions. This theory suggests that emotion signals that current behaviour should be interrupted to take account of a conflicting goal. Emotion brings about a cognitive adjustment of life goals. If a current plan is frustrated, anger will lead to the behavioural response of either trying harder or showing aggression. However, the mechanisms involved are not specified. b. Emotions mobilise physiological responses: An example is the fight-or-flight response to fear. In modern life, an extreme physiological response to emotional threat is often inappropriate. However, there are still situations where a rapid physical response, especially to fear, is useful. Physical arousal, if maintained at an optimal level, can improve performance. The Yerkes-Dodson law suggests that lower levels of arousal optimise the performance of difficult tasks and higher levels of arousal facilitate easier tasks.
Chapter 14: Cognition and emotion the function of emotions (2) c. Emotional expressions as communication: From an evolutionary standpoint, Darwin suggested that expressions of emotion communicate the emotional status of an animal to others of its species (conspecifics). However, he believed that emotional expression in humans was no longer functional. Display rules for emotional expression differ between cultures, so humans have the ability to deceive, e.g. used for social manipulation. d. Emotions as information: In the Capgras delusion, a person believes that a relative or someone close to them has been replaced by an imposter or an alien. Normally, changes in SC (skin conductance) signal an emotional response to the face of someone we know, but these are absent in Capgras patients, i.e. there is no physiological feedback. Damasio proposed that emotions provide information to guide decision making. In his gambling task, people modify their strategy in response to rewards and fines without conscious awareness of why they do so. He has extended this to explain life decisions. Emotional responses produce physiological changes (somatic markers) which guide choices in the gambling task and in our behaviour more broadly. These markers represent the 'gut feelings' that often guide our choices.
Chapter 14: Cognition and emotion basic emotions Many theorists believe there are a small number of basic emotions, with a broad consensus on the Big Five: anger; fear; sadness; disgust; and happiness. Other emotions are the result of different combinations of these. Ekman carried out cross-cultural studies, adding surprise as a 6th basic emotion. When shown pictures, people consistently used similar labels to identify emotions. He asked people in New Guinea to show what their face would look like for each emotion, and photographed them. American students readily identified the emotions shown in the photos. He also studied infants in different cultures. His work provides evidence that emotions have genetic origins: * the universality shown in cross-cultural studies * the early emergence of emotional expression in infancy Additionally, blind children spontaneously produce emotional facial expressions which cannot have been learned. Researchers suggest that brain systems underlying basic emotions emerged far back in our evolutionary past. PET and fMRI scans have identified the amygdala as involved in processing all types of emotions, especially fear. The insula and the basal ganglia are implicated in recognising disgust.
Chapter 16: Applying cognitive psychology eye-witness identification: line-ups See also: Bruce & Young and IAC models of face recognition (chapter 4) and the encoding specificity principle (Tulving, chapter 6). Wells suggested 2 kinds of variables: uncontrollable estimator variables, e.g. viewing distance when an event was witnessed, and controllable system variables, e.g. procedures used to elicit information. Wells & Olson later added suspect-bias variables, relating to line-up composition, and general impairment variables, e.g. own-race bias. A meta-analysis carried out by Steblay et al. found that identification using sequential line-ups was more accurate than for simultaneous line-ups (sequential superiority effect). McQuiston-Surrett et al. found this was true of target-absent line-ups; however, in target-present line-ups, simultaneous line-ups were more accurate. Differences in methodology may account for some differences in findings. Instructions to witnesses are standardised, but identification decisions seem to vary with the line-up presentation. With simultaneous line-ups, a relative decision-making strategy may be used, based on the expectation that the perpetrator is present. In sequential line-ups, whether 'live' or using video (e.g. VIPER, which research has found to be very effective), the witness must make an absolute decision.
Chapter 16: Applying cognitive psychology misinformation effect: cognitive explanations Loftus has shown that misleading information about events and faces can affect accuracy of recall (misinformation effect). This is greater with a longer delay between witnessed events and recall, lack of confidence in one's memory and personality factors, e.g. introversion. Loftus suggests misleading information distorts the original memory trace: new information is integrated into the original memory. However, McCloskey & Zaragoza claim that the original memory is not affected; rather, the misinformation effect is the result of response bias, where a witness draws on misleading information rather than the original information. Chandler et al. found interference effects disappeared over time, challenging Loftus's claim that misinformation distorts memory representation. The misinformation effect has also been explained in terms of source monitoring, i.e. failure to distinguish between what was actually experienced and what was only heard / read about, and to make a correct source attribution. This framework distinguishes between 3 types of monitoring: reality monitoring; external source monitoring; internal source monitoring. Factors affecting source attribution include delay, contextual information and current goals (Reyna & Lloyd). Remembering is a reconstructive process (Bartlett). Spiro suggests that memory distortion is likely when different sources of information are inconsistent, and schematic knowledge is used that may be closely related to the misleading information provided.
Chapter 16: Applying cognitive psychology cognitive interview (CI) The CI, developed by Fisher & Geiselman, aims to elicit from witnesses as much accurate information as possible, and eliminate as far as possible factors (e.g. misleading information) which may limit or distort recall. Key features include: (1) establishing rapport with the witness; (2) asking for free recall of what they witnessed; (3) asking them to mentally reinstate the context; and (4) to recall in different orders and from different perspectives. (1) and (2) relate to social factors, and aim to enhance communication and reduce the power imbalance between interviewer and witness. (3) and (4) relate to cognitive research, in particular the encoding specificity principle (Tulving, chapter 6). Stein & Memon found that compared with a standard interview, the CI increased both the quantity and quality of information recalled. Some studies have found it produced a small increase in inaccurate information, perhaps the result of more information being recalled. Research has also shown it to be effective with children and the elderly. Some studies have shown that police interviews do not fully apply the CI; it is a time-consuming process. Dando et al. therefore developed a modified CI, asking witnesses to sketch the scene rather than mentally reinstating it. This was quicker than the CI, and the information elicited was just as accurate. It also has the advantage of asking witnesses to produce their own cues.
Chapter 16: Applying cognitive psychology emotional intelligence Salovey & Mayer originally defined emotional intelligence as a very broad set of abilities. While Goleman has defined it as a set of skills distinct from cognitive intelligence, Mayer et al. developed a formal 4-branch model of emotional intelligence which combines cognition and emotion. Abilities are at different levels: from perceiving emotions (lowest level); using emotions; understanding emotions; to managing emotions (highest level). These abilities mean that people are aware of their own and others' emotions, and their consequences in different social circumstances. The link between cognition and emotion can be seen in cognitive-behavioural therapy, where negative cognitions are challenged and new behaviours introduced which will impact on emotional processing. Research using the MSCEIT supports the Mayer et al. model, and has shown that emotional intelligence increases with age, suggesting it can be learned. The test also has predictive value (McEnrue & Groves), e.g. relating to leadership potential. However, it seems to measure some factors better than others, and its validity for different cultures, ages and ethnic groups has not as yet been tested. It also uses self-assessment, so may not capture the complexities of social and interpersonal interaction. There is little agreement regarding the definition of emotional intelligence; the concept shares many of the difficulties of intelligence in general.
Chapter 16: Applying cognitive psychology theories of intelligence Spearman suggested that intelligence comprises a general factor (g) and other specific factors (s). However, Sternberg argues the emergence of g in intelligence tests may be partly the result of confirmation bias: this is what the tests are designed to find. Other theorists have suggested that intelligence is made up of different components, e.g. the 7 components of Thurstone's theory. Cattell combined g with other dimensions in a 3-layer hierarchical model, revised in the Cattell-Horn-Carroll (CHC) theory. Layer 3 is g, layer 2 broad abilities (e.g. STM) and layer 1 narrow abilities (e.g. STM includes the narrow abilities of memory span, WM and learning abilities). This theory is well supported by research, but McGrew argues that it should be seen as a framework within which to describe research and stimulate new research. Sternberg's triarchic theory has 3 elements: the individual's internal world (cognitive processing), their experience and their external world. Experience mediates the interaction between their internal and external worlds. Intelligence cannot be fully assessed in any one environment, and must also relate to culture. The STAT has been widely used as a research tool. Sternberg claims that all children benefit from triarchic learning, combining cognitive analytical with practical experiential tasks.
Chapter 16: Applying cognitive psychology intelligence: definition; links to attention and working memory (WM) There is no agreed definition of intelligence; the focus is on cognition, though there is disagreement as to what the cognitive components might be. There is some consensus that it includes (a) the ability to adapt to the environment and (b) the capacity to learn from experience. However, tests do not measure these components very well. It is sometimes defined operationally in terms of what a particular test measures. Theories do not define attention as a component of intelligence. However, Schweizer et al. found that all types of attentional processing (e.g. divided, focused) were related to intelligence test performance. Burns et al. found a near-perfect correlation between tests of attention (e.g. Stroop) and tests of cognitive ability (e.g. concept formation). There is research support for a relationship between measures of cognitive ability and working memory (cf the central executive component of WM as an attention system). Schweizer & Moosbrugger found tests of attention and WM together predicted intelligence test performance. Kyllonen defined intelligence as WM capacity; however, a meta-analysis carried out by Ackerman et al. found a relatively low correlation between general intelligence and WM. The differing results could be explained at least in part in terms of the different research methods used.