A suite of tools for smooth and simple extraction of the essence of text, whether streaming online text or your own data sets. Our tools give you instant insight into what is said and how: the concepts and emotions expressed in text.
Find out what is said online about your brand, how your marketing campaign is received by its audience, or process customer surveys, employee questionnaires, and opinion polls. Our tools come with a convenient web GUI for interactive use and a customisable API for integration into your own tools and services.
Keep track of everything that happens online with respect to your target of interest in one simple and convenient dashboard. Gavagai Monitor collects everything written in public text streams, in any language, and reads it all to create executive summaries for you, giving you instant insight into what is going on.
No need to avoid open-text answers anymore - coding them just became much easier! Upload your survey responses and Gavagai Explorer will identify the most salient concepts and cluster the texts for you. You can navigate through the responses, recluster, combine concepts, and find your insights in a fraction of the time manual coding schemes would require. Save days or weeks of hard manual work with Gavagai Explorer!
With Gavagai Sentiment you can track emotions, attitudes, sentiment, or affect for anything and everything written online. Gavagai Sentiment analysis measures not only what is being said but also how it is expressed. You can set the analysis tool to send you alerts and notifications when the attitude online shifts notably. Gavagai Sentiment lets you track any of a number of predefined sentiments, not limited to Positive and Negative, and also lets you define an attitude of interest for your own business or PR needs.
We want to make our text analytics engine available to all techies and give our customers the possibility to integrate our services into their products and processes.
One of the strongest trends in Natural Language Processing (NLP) at the moment is the use of word embeddings: vectors whose relative similarities correlate with semantic similarity. Such vectors are used both as an end in themselves (for computing similarities between terms) and as a representational basis for downstream NLP tasks like text classification, document clustering, part-of-speech tagging, named entity recognition, sentiment analysis, and so on. That this is a trend is obvious when looking at the proceedings from the recent large conferences in NLP, e.g. ACL or EMNLP. For the first time (ever), semantics was…
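To make the idea concrete, here is a minimal sketch of how embedding similarity is computed. The vectors below are tiny hypothetical values for illustration only, not real trained embeddings; in practice embeddings have hundreds of dimensions and are learned from large corpora.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: close to 1.0 for
    vectors pointing in the same direction, near 0.0 for unrelated ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 4-dimensional embeddings (made-up numbers, for illustration).
embeddings = {
    "king":  [0.8, 0.6, 0.1, 0.0],
    "queen": [0.7, 0.7, 0.1, 0.1],
    "apple": [0.0, 0.1, 0.9, 0.8],
}

# Semantically related terms get a higher similarity score.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```

This term-to-term similarity is the "end in itself" use case; the same vectors can also serve as input features for classifiers and clustering algorithms in the downstream tasks mentioned above.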
As we have done previously, we again followed Greek editorial and social media in the days preceding last week's parliamentary election in Greece. The opinion polls published in the preceding weeks suggested a neck-and-neck race between the incumbent socialist Syriza party and the main contender, the conservative Nea Demokratia. Our friend Haralampos Karatzas took our numbers for analysis on his blog on Greek politics (in Swedish). Our numbers showed a very different picture: Syriza garnered far more attention. As previously, Syriza, being a controversial party, was also the focus of stronger expressions of sentiment than any other party. Last time around, we…
We will present three research papers during this year's EMNLP (Empirical Methods in Natural Language Processing) conference in Lisbon, Portugal. The first paper compares a support vector classifier to a lexicon-based approach for the task of detecting the stance categories speculation, contrast, and conditional in English consumer reviews. The paper is called "Detecting Speculations, Contrasts and Conditionals in Consumer Reviews", and will be presented on Thursday, September 17, at the co-located Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA). The second paper investigates a method for factorizing distributional semantic models that produces state-of-the-art results on…
We presented a short paper at the 6th CLEF 2015 Conference and Labs of the Evaluation Forum in Toulouse, reporting on the deliberations of the workshop on Evaluating Learning Language Models that we held last fall with generous support from ELIAS. The presentation raised a fair bit of interest and several requests for a follow-up workshop, and we are now motivated to continue by actually implementing the evaluation metrics proposed at the workshop.