Socially shared perception and metacognition
Perceptual experience provides rational support for actions, beliefs, and intentions. When you see a banana as yellow, that perceptual experience makes it reasonable for you to believe that the banana is yellow. But decades of research show that other people have an implicit impact on individual perception and cognition. This influence occurs even with the mere presence of others: we unconsciously and spontaneously encode others’ perceptual perspectives and shift our frame of reference accordingly. Theories of human perception and attention, however, have been predominantly centred on the solitary mind. What is the epistemic standing of socially influenced perception?
To answer this question, one of the key factors that must be addressed is conscious access. If you are unaware that others distort your perceptual experience, are you responsible for the beliefs and behaviours based on that experience? What, if any, is the epistemic responsibility of other agents in distorting, or aiding, an individual’s perception? Building on current models of perceptual metacognition, the project will (1) develop a principled way to determine experimentally whether social influences on perception are still minimally accessible to consciousness, and (2) examine how these social influences and their metacognitive profile modulate the epistemic role of perceptual experience in justifying actions and beliefs.
Collective perception and the sense of reality
We can differentiate veridical perceptual experiences from imagination and hallucination because veridical perception comes together with a feeling that we are perceiving the real world: a sense of reality. But what makes an experience feel real? In both the analytic and phenomenological traditions of philosophy, perceptual social interaction has been considered an important step towards the idea of a shared objective reality. However, most philosophical accounts of the sense of reality in perception take an individualistic approach.
The aim of this project is to build on the proposal that we directly perceive the content of another’s perspective, and to examine how a sense of reality is constructed through shared perception. I will critically examine the claim that the ability to coordinate my perception of an object with another individual goes hand in hand with the ability to experience objects as real, and to differentiate them from hallucinations and imagery.
Joint Attention: Perception and Other Minds
From care-giver and infant playing with a toy, to singing duets or playing basketball, we frequently and effortlessly coordinate our attention with others towards a common focus. Joint attention plays a fundamental role in our social lives: it ensures that we refer to the same object, develop a shared language, understand each other and coordinate our actions.
This project aims to elucidate the relational nature of joint attention, and its functional significance for social cognition, including cases involving different sense modalities and more complex forms of joint activities.
The project's subgoals are to:
- clarify the role of perceptual experience for characterising joint attention;
- propose a functional framework to assess multisensory contributions to establishing and maintaining joint attention;
- test the hypothesis that engaging in joint attention can affect the processing of multisensory information.
Battich, L., Garzorz, I., Wahn, B., & Deroy, O. (2021). The impact of joint attention on the sound-induced flash illusions. Attention, Perception, & Psychophysics, 83(8), 3056–3068. doi: 10.3758/s13414-021-02347-5
Humans coordinate their focus of attention with others, either by gaze following or prior agreement. Though the effects of joint attention on perceptual and cognitive processing tend to be examined in purely visual environments, they should also show in multisensory settings. According to a prevalent hypothesis, joint attention enhances visual information encoding and processing, over and above individual attention. If two individuals jointly attend to the visual components of an audiovisual event, this should affect the weighing of visual information during multisensory integration. We tested this prediction in this preregistered study, using the well-documented sound-induced flash illusions, where the integration of an incongruent number of visual flashes and auditory beeps results in a single flash being seen as two (fission illusion) and two flashes as one (fusion illusion). Participants were asked to count flashes either alone or together, and were expected to be less prone to both fission and fusion illusions when they jointly attended to the visual targets. However, illusions were as frequent when people attended to the flashes alone or with someone else, even though they responded faster during joint attention. Our results reveal the limitations of the theory that joint attention enhances visual processing, as it does not affect temporal audiovisual integration.
Joint attention customarily refers to the coordinated focus of attention between two or more individuals on a common object or event, where it is mutually "open" to all attenders that they are so engaged. We identify two broad approaches to analyse joint attention, one in terms of cognitive notions like common knowledge and common awareness, and one according to which joint attention is fundamentally a primitive phenomenon of sensory experience. John Campbell's relational theory is a prominent representative of the latter approach, and the main focus of this paper. We argue that Campbell's theory is problematic for a variety of reasons, through which runs a common thread: most of the problems that the theory is faced with arise from the relational view of perception that he endorses, and, more generally, they suggest that perceptual experience is not sufficient for an analysis of joint attention.
From playing basketball to ordering at a food counter, we frequently and effortlessly coordinate our attention with others towards a common focus: we look at the ball, or point at a piece of cake. This non-verbal coordination of attention plays a fundamental role in our social lives: it ensures that we refer to the same object, develop a shared language, understand each other’s mental states, and coordinate our actions. Models of joint attention generally attribute this accomplishment to gaze coordination. But are visual attentional mechanisms sufficient to achieve joint attention, in all cases? Besides cases where visual information is missing, we show how combining it with other senses can be helpful, and even necessary to certain uses of joint attention. We explain the two ways in which non-visual cues contribute to joint attention: either as enhancers, when they complement gaze and pointing gestures in order to coordinate joint attention on visible objects, or as modality pointers, when joint attention needs to be shifted away from the whole object to one of its properties, say weight or texture. This multisensory approach to joint attention has important implications for social robotics, clinical diagnostics, pedagogy and theoretical debates on the construction of a shared world.
The field of language evolution has recently made Gricean pragmatics central to its task, particularly within comparative studies between human and non-human primate communication. The standard model of Gricean communication requires a set of complex cognitive abilities, such as belief attribution and understanding nested higher-order mental states. On this model, non-human primate communication is then of a radically different kind to ours. Moreover, the cognitive demands in the standard view are also too high for human infants, who nevertheless do engage in communication. In this paper I critically assess the standard view and contrast it with an alternative, minimal model of Gricean communication recently advanced by Richard Moore. I then raise two objections to the minimal model. The upshot is that this model is conceptually unstable and fails to constitute a suitable alternative as a middle ground between full-fledged human communication and simpler forms of non-human animal communication.
Opening up the Openness of Joint Attention
The ability to engage in joint attention, in which two individuals attend to the same object or event together, is considered fundamental for language learning, for understanding others and for joint actions. Joint attention is often defined as a mutually open, or transparent relation between co-attenders. But how should this openness be characterised? Two broad theoretical views have been proposed. One view reductively accounts for the mutual awareness characteristic of joint attention in terms of individual mental states and properties. According to non-reductive views, in contrast, mutual awareness is based on some primitive intersubjective relation, which is irreducible to the individual states of its relata. I argue that tensions in these approaches arise from the attempt to address both normative and cognitive explananda simultaneously. Both approaches are primarily designed to tackle the normative epistemological concerns of joint attention, and their problems arise when they conflate these concerns with psychological ones. Drawing from evidence in developmental and cognitive psychology, I outline the case for a cognitive-first account of joint attention based on a weaker notion of openness and mutual awareness. I conclude by assessing the epistemic implications of this account.