The universal flexibility of biological systems needs to be reflected in cognitive architectures. In PRIMs, we attempt to achieve flexibility through a bottom-up approach. Using contextual learning, a set of randomly firing instantiated primitive operators is gradually organized into context-sensitive operator firing sequences (i.e., primordial “skills”). Based on this implementation, preliminary results show that the model’s averaged single-pattern processing latency is consistent with infants’ differential focusing times in three theoretically controversial artificial language studies, namely Saffran, Aslin, and Newport (1996), Marcus, Vijayan, Rao, and Vishton (1999), and Gomez (2002). In our ongoing work, we are analyzing (a) whether the model can arrive at primordial “skills” adaptive to the trained tasks, and (b) whether the learned chunks mirror the trained patterns.
Title of host publication: Poster session presented at the 18th International Conference on Cognitive Modeling
Publication status: E-pub ahead of print - 2020
Event: The 18th Annual Meeting of the International Conference on Cognitive Modelling - Toronto, Canada
Duration: 22 Jul 2020 → 31 Jul 2020