- PI, 2017-2021: AMORE: A distributional MOdel of Reference to Entities (EU, European Research Council Starting Grant 715154, starting February 2017).
Abstract: The ability to use language to refer to reality ("that big tree over there") is crucial for humans, and yet it is very difficult to model computationally. AMORE breaks new ground in Computational Linguistics, Linguistics, and Artificial Intelligence by developing a model of reference to entities, implemented as a computational system that learns its own representations from data. AMORE aims to bring together two traditions (symbolic AI/Linguistics and continuous approaches, specifically deep learning) in a single, scalable model of reference that operates with individuated referents and links them to referential expressions characterized by rich descriptive content. We are developing a neural network version of a formal semantic framework that is, furthermore, able to integrate perceptual (visual) and linguistic information about entities.
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 715154).
Past projects (from 2011 onward only)
- PI, 2015-2016: LOVe: Linking Objects to Vectors in distributional semantics: A framework to anchor corpus-based meaning representations to the external world (EU, Marie Skłodowska-Curie project 655577, H2020-MSCA-IF-2014).
Abstract: Language mediates between concepts in our mind and the things they refer to in the world. Semantic theories are typically biased towards conceptual or referential aspects. My goal is to develop a theory of meaning that takes both aspects into account, and is supported by computational modeling experiments, enabling computers to match linguistic expressions with entities in the world. My model is based on distributional semantics, a scalable and flexible approach to computational semantics that, by inducing meaning representations from naturally occurring data with statistical methods, can model large portions of the lexicon and account for nuances in meaning that pose difficulties to traditional semantic theories. Distributional semantics has so far largely eschewed the reference issue, by testing its models on language-internal tasks. The project bridges this language-world gap, and integrates the distributional framework into a referential semantic theory. LOVe promises to advance our scientific understanding of language and make significant progress towards building computers we can talk to.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 655577.
- Collaborating researcher, 2014-2015: Connecting Conceptual and Referential Models of Meaning (MINECO, FFI2013-41301-P).
- Member, 2015: SIGGRAM: "Significado y Gramática" (Meaning and Grammar) (MINECO Redes de Excelencia Program grant FFI2014-51675-REDT), PI: Louise McNally, Universitat Pompeu Fabra.
- Collaborating researcher, 2012-2014: Statistical Relational Learning and Script Induction for Textual Inference (DARPA DEFT program under AFRL grant FA8750-13-2-0026).
- Collaborating researcher, 2011-2013: OntoSem 2: Natural language ontology and the semantic representation of abstract objects 2 (MICINN, FFI2010-15006).
- Collaborating researcher, 2009-2013: GPLN: Grup de Processament del Llenguatge Natural (Natural Language Processing Group), funded consolidated research group (AGAUR, 2009 SGR 1082).
- Member, 2008-2013: PASCAL 2: Pattern Analysis, Statistical Modelling, and Computational Learning 2 (EU Network of Excellence).
- Collaborating researcher, 2011-2012: REDISIM: A distributional semantic model for fully recursive phrasal meaning (MICINN, FFI2010-09464-E).
- Collaborating researcher, 2009-2012: KNOW-II: Language understanding technologies for multilingual domain-oriented information access (MICINN,