Abstract
The universal flexibility of biological systems needs to be reflected in cognitive architecture. In PRIMs, we attempt to achieve this flexibility through a bottom-up approach: using contextual learning, the random firing of a set of instantiated primitive operators is gradually organized into context-sensitive operator firing sequences (i.e., primordial “skills”). Based on this implementation, preliminary results show that the model's averaged single-pattern processing latency is consistent with infants' differential focusing times in three theoretically controversial artificial-language studies, namely Saffran, Aslin, and Newport (1996), Marcus, Vijayan, Rao, and Vishton (1999), and Gomez (2002). In ongoing work, we are analyzing (a) whether the model arrives at primordial “skills” adapted to the trained tasks, and (b) whether the learned chunks mirror the trained patterns.
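The abstract itself contains no code, but the core mechanism it describes (initially random operator firing that becomes context-bound through learning) can be illustrated with a minimal sketch. The sketch below is not the PRIMs implementation; it assumes a simple softmax-over-utilities selection rule, and every name in it (OPERATORS, select, learn, the "syllable"/"pause" contexts) is a hypothetical stand-in.

```python
import math
import random
from collections import defaultdict

OPERATORS = ["op_a", "op_b", "op_c"]   # stand-ins for instantiated primitive operators
utility = defaultdict(float)           # (context, operator) -> learned utility

def select(context, temperature=0.5):
    """Softmax over utilities: selection is near-random at first and
    becomes context-sensitive as utilities differentiate."""
    weights = [math.exp(utility[(context, op)] / temperature) for op in OPERATORS]
    r = random.uniform(0.0, sum(weights))
    for op, w in zip(OPERATORS, weights):
        r -= w
        if r <= 0.0:
            return op
    return OPERATORS[-1]

def learn(context, op, reward, rate=0.1):
    """Nudge the utility of the operator that just fired toward the reward."""
    key = (context, op)
    utility[key] += rate * (reward - utility[key])

# Toy training loop: different operators pay off in different contexts,
# so over trials the firing pattern organizes itself by context.
for _ in range(2000):
    ctx = random.choice(["syllable", "pause"])
    op = select(ctx)
    reward = 1.0 if (ctx, op) in {("syllable", "op_a"), ("pause", "op_b")} else 0.0
    learn(ctx, op, reward)
```

After training, select("syllable") almost always returns op_a and select("pause") returns op_b, a toy analogue of random firings settling into context-sensitive sequences.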
Original language | English
---|---
Title of host publication | Poster session presented at the 18th International Conference on Cognitive Modeling
Editors | Terrence C. Stewart
Publisher | Wiley
Pages | 107-114
Number of pages | 8
ISBN (Print) | 978-0-9985082-4-5
Publication status | Published - 2020
Event | 18th International Conference on Cognitive Modeling, Toronto, Canada, 22 Jul 2020 → 31 Jul 2020