Description: Cognitive science and artificial intelligence have an intertwined history. More than half a century ago, for example, neural networks emerged from interdisciplinary work by psychologists and computer scientists, and David Marr's deep insights into vision and levels of analysis grew out of his knowledge of both of these broad scientific domains. In this project, we will use insights from human and animal cognition: that representations found in the brain have certain computational properties, and that the categorisation literature on children and adults provides frameworks for successful learning and generalisation. We will translate and apply these findings with the aim of improving deep learning models in terms of transparency, accountability, and training. One part of this project involves aligning qualitatively different sources of data (e.g., images, words, and audio) to create richer multi-modal semantic spaces in deep neural networks. In other words, we can take one type of data, such as images, and compare and combine the representational space a model builds for them with the space that already exists for, say, words. This will pave the road towards better machine learning models as well as better models of human and animal cognition.
Collaboration with: Olivia Guest (RISE, Cyprus)
Techniques used: Deep learning, modelling.
Started: Feb. 2020
Publications: Savvas Karatsiolis and Andreas Kamilaris, Converting Image Labels to Meaningful and Information-Rich Embeddings, Proc. of the 10th International Conference on Pattern Recognition Applications and Methods (ICPRAM 2021), Vienna, Austria, February 2021.
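The cross-modal alignment idea in the description can be sketched in a minimal toy example. This is not the project's actual method; it is a hedged illustration, using random stand-ins for visual features and word embeddings, of one simple way to align two representational spaces: fit a linear map from image-feature space into word-embedding space by least squares, then label each projected image by its nearest concept under cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 50 "images" with 128-d visual features, each
# paired with one of 10 concepts that have 64-d word embeddings.
n_images, d_img, d_word, n_concepts = 50, 128, 64, 10
image_feats = rng.normal(size=(n_images, d_img))
word_embeds = rng.normal(size=(n_concepts, d_word))
labels = rng.integers(0, n_concepts, size=n_images)

# Target for each image: the word embedding of its label.
targets = word_embeds[labels]

# Learn a linear map W (d_img x d_word) by least squares so that
# image_feats @ W approximates the paired word embeddings -- a simple
# linear alignment of the visual space onto the semantic space.
W, *_ = np.linalg.lstsq(image_feats, targets, rcond=None)

# Project images into the word space and label each by its nearest
# concept under cosine similarity.
projected = image_feats @ W
proj_norm = projected / np.linalg.norm(projected, axis=1, keepdims=True)
word_norm = word_embeds / np.linalg.norm(word_embeds, axis=1, keepdims=True)
similarity = proj_norm @ word_norm.T
predicted = similarity.argmax(axis=1)

accuracy = (predicted == labels).mean()
print(f"training-set alignment accuracy: {accuracy:.2f}")
```

Once images live in the word-embedding space, the same similarity computation supports comparing and combining the two modalities, which is the sense in which alignment yields a shared multi-modal semantic space.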