
Author: Elizabeth SHIRLEY

Co-Author(s): Professor James Douglas SADDY, Dr Vitor C ZIMMERER

Processing of Lindenmayer Grammars in an Artificial Grammar Learning Task

Abstract: Identifying the cognitive processes that allow humans to extract syntactic structures is a key question for linguistic researchers. The artificial grammar learning (AGL; Reber, 1967) paradigm allows investigation of syntactic processing in the absence of lexical-semantic demands. AGL tasks consist of a training phase during which the participant is exposed to stimulus sequences generated by a set of rules. Learning is assessed during a test phase in which the participant discriminates between novel sequences generated by the same set of rules (grammatical) and sequences that violate these rules (ungrammatical). Numerous studies have produced evidence for learning in the visual (letters, abstract shapes) and auditory (syllables, tones) sensory modalities. However, there is ongoing debate as to the type of knowledge acquired (Pothos, 2007). Originally considered evidence for abstract rule learning, discrimination of AGL sequences can potentially be accounted for by alternative strategies such as sequence similarity, template matching or extraction of transition probabilities. In our study we employed Lindenmayer grammars (L-grammars; Lindenmayer, 1968), which generate hierarchically structured sequences using recursive transformation rules. We trained participants on the simplest deterministic context-free system (the Fibonacci grammar), presenting its output as an auditory sequence. After training, participants correctly discriminated novel grammatical strings from strings generated by another L-grammar, implying that they had formed a successful representation of the Fibonacci grammar's higher-order structure. They repeated this success when tested with pseudo-grammatical sequences ("pseudo-Fib"), created by concatenating small Fibonacci-structured chunks in a random order. To our knowledge this is the first language processing study to use sequences generated by L-systems.
Our results demonstrate successful extraction of structure from L-system sequences. We discuss possible cognitive strategies facilitating such extraction, and how well our data can be accounted for by existing models of syntactic processing.
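To make the construction concrete, the following is a minimal sketch of how Fibonacci-grammar strings and "pseudo-Fib" sequences of the kind described above can be generated. The symbol names (A, B), function names, and chunking parameters are illustrative assumptions, not the study's materials; in the study itself the output was rendered as auditory stimuli.

```python
import random

def fibonacci_lsystem(generations, axiom="A"):
    """Rewrite every symbol in parallel per generation: A -> AB, B -> A.
    This is the standard Fibonacci L-system; successive string lengths
    follow the Fibonacci sequence (1, 2, 3, 5, 8, ...)."""
    rules = {"A": "AB", "B": "A"}
    s = axiom
    for _ in range(generations):
        s = "".join(rules[c] for c in s)
    return s

def pseudo_fib(n_chunks, max_generation=3, seed=None):
    """Concatenate small Fibonacci-structured chunks in random order,
    mimicking the 'pseudo-Fib' construction described in the abstract.
    The chunk sizes and sampling scheme here are assumptions."""
    rng = random.Random(seed)
    chunks = [fibonacci_lsystem(g) for g in range(1, max_generation + 1)]
    return "".join(rng.choice(chunks) for _ in range(n_chunks))

print(fibonacci_lsystem(3))  # ABAAB
```

Because the rewrite is applied to all symbols in parallel, each generation embeds the previous one as a prefix, which is the hierarchical self-similarity that makes L-system output structurally richer than sequences defined only by transition probabilities.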