Our technology is based on 15+ years of research in computational linguistics and computer science. We believe that no significant progress is possible without pushing boundaries and breaking new ground, so we are constantly refining and advancing our algorithms and methodologies. We work in two main areas of research:
Distributional semantics, a research area in which we develop and study theories and methods for quantifying and categorising semantic similarities between linguistic items based on their distributional properties in large samples of language data. Our interest here is manifold: we work on algorithms for effectively acquiring semantic knowledge from text, on rich and useful representations of linguistic content and situational context, and on applications of distributional models to real-world tasks (mostly, of course, tasks of commercial interest to us).
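To illustrate the core idea behind distributional semantics (this is a toy sketch, not our production method; the corpus, window size, and similarity measure are all illustrative choices), words that occur in similar contexts receive similar co-occurrence vectors, and their similarity can be quantified with a measure such as cosine:

```python
from collections import Counter
from math import sqrt

# Toy corpus: a real distributional model would use a large text sample.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat chased a dog".split(),
]

def cooccurrence_vector(word, sentences, window=2):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for sent in sentences:
        for i, w in enumerate(sent):
            if w == word:
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[sent[j]] += 1
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

cat, dog, mat = (cooccurrence_vector(w, corpus) for w in ("cat", "dog", "mat"))
# "cat" and "dog" share more contexts than "cat" and "mat",
# so the model judges them more similar.
print(cosine(cat, dog) > cosine(cat, mat))  # → True
```

The same pipeline scales up by replacing the toy corpus with billions of words and the raw counts with weighted or dimensionality-reduced representations.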
Evaluation of language learning models, a research area in which we develop methods and metrics to test and compare algorithms, memory models, and processing approaches, both to benchmark improvements and to validate approaches against tasks of interest.
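One common way to benchmark competing models, sketched here with hypothetical data (the gold ratings and model scores below are invented for illustration), is to correlate each model's similarity scores with human judgments using Spearman's rank correlation:

```python
def spearman(xs, ys):
    """Spearman rank correlation; assumes no tied values, for simplicity."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical benchmark: human similarity ratings for four word pairs,
# and scores produced by two competing models.
gold    = [9.1, 7.4, 3.2, 1.0]
model_a = [0.92, 0.80, 0.35, 0.10]   # preserves the gold ranking
model_b = [0.40, 0.85, 0.20, 0.60]   # scrambles it

print(spearman(gold, model_a))  # → 1.0
print(spearman(gold, model_a) > spearman(gold, model_b))  # → True
```

A higher correlation with the gold standard indicates that a model better captures the judgments the task cares about, which is what makes such metrics useful both for benchmarking incremental improvements and for validating an approach against a task of interest.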