Martha Lewis
NIAS-Lorentz Theme Group Fellow
Project title
Neurosymbolic Models of Analogy
Research question
What representations do AI models need in order to reason analogically?
Project description
While large language models (LLMs) are undeniably successful, there are also areas in which they fail. These include compositional generalisation, abstraction, and analogical reasoning. In contrast, humans – including children – perform these tasks with ease.
Humans can perform these tasks because they can manipulate and reason with symbols, whereas evidence suggests that LLMs rely largely on shallow pattern matching. However, an emerging paradigm, neurosymbolic AI, seeks to incorporate symbolic structure into neural systems. Neurosymbolic models have interpretable structure and are less susceptible to solving problems based on surface similarities, while still benefiting from graded neural representations that can be learned from data.
With this project, Martha Lewis will examine how symbolic theories of analogical reasoning can be realised in neural networks. To map from symbolic to neural models, she will use methods such as Tensor Product Representations, categorical compositional distributional semantics, and Logic Tensor Networks.
Each approach makes different commitments regarding concept representation and composition, thereby contributing to our understanding of what is required for analogical reasoning.
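To give a flavour of one of these formalisms: in a Tensor Product Representation (Smolensky's scheme, mentioned above), a symbolic structure is encoded as a sum of outer products of "filler" vectors (the contents) with "role" vectors (the structural positions), and a filler can be recovered by contracting with its role vector. A minimal NumPy sketch, with illustrative vectors chosen here for demonstration only:

```python
import numpy as np

# Tensor Product Representations: a structure is encoded as a sum of
# outer products of filler vectors with role vectors.
rng = np.random.default_rng(0)

# Three orthonormal role vectors (subject, verb, object).
# Orthonormality makes unbinding exact.
roles = np.eye(3)

# Filler vectors for the sentence "dogs chase cats" (illustrative values).
fillers = {"dogs": rng.normal(size=4),
           "chase": rng.normal(size=4),
           "cats": rng.normal(size=4)}

# Bind: T = sum_i  filler_i (outer) role_i
T = sum(np.outer(f, r) for f, r in zip(fillers.values(), roles))

# Unbind the "verb" role by contracting T with that role vector.
recovered = T @ roles[1]
print(np.allclose(recovered, fillers["chase"]))  # True
```

Because the fillers are graded vectors while the roles impose symbolic structure, the same representation supports both similarity-based and structure-sensitive operations, which is what makes such schemes attractive for modelling analogy.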
Selected publications
- Guo, Z., Xue, C., Xu, Z., Bo, H., Ye, Y., Pierrehumbert, J. B., & Lewis, M. (2025). Quantifying Compositionality of Classic and State-of-the-Art Embeddings. Findings of the 2025 Conference on Empirical Methods in Natural Language Processing.
- Lewis, M., & Mitchell, M. (2025). Evaluating the Robustness of Analogical Reasoning in Large Language Models. Transactions on Machine Learning Research.
- Vegner, I., de Souza, S., Forch, V., Lewis, M., & Doumas, L. A. A. (2025). Behavioural vs. Representational Systematicity in End-to-End Models: An Opinionated Survey. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 31842–31856, Vienna, Austria.
- Tong, X., Choenni, R., Lewis, M., & Shutova, E. (2024). Metaphor Understanding Challenge Dataset for LLMs. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3517–3536, Bangkok, Thailand.