Embarrassingly Simple Unsupervised Aspect Extraction

Stéphan Tulkens, Andreas van Cranenburgh

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    38 Citations (Scopus)
    159 Downloads (Pure)

    Abstract

    We present a simple but effective method for aspect identification in sentiment analysis. Our unsupervised method only requires word embeddings and a POS tagger, and is therefore straightforward to apply to new domains and languages. We introduce Contrastive Attention (CAt), a novel single-head attention mechanism based on an RBF kernel, which gives a considerable boost in performance and makes the model interpretable. Previous work relied on syntactic features and complex neural models. We show that given the simplicity of current benchmark datasets for aspect extraction, such complex models are not needed. The code to reproduce the experiments reported in this paper is available at https://github.com/clips/cat.
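
    The abstract describes attention weights computed with an RBF kernel. A minimal sketch of how such a single-head contrastive attention could look is given below; the function names, the gamma value, and the label-assignment step are illustrative assumptions, not the exact implementation from the repository linked above.

        import numpy as np

        def rbf(x, y, gamma=0.03):
            # RBF kernel between a word vector and a candidate aspect vector.
            return np.exp(-gamma * np.sum((x - y) ** 2))

        def contrastive_attention(word_vectors, aspect_vectors, gamma=0.03):
            # Each word's weight is its total RBF similarity to the candidate
            # aspect vectors, normalized over the words in the sentence.
            scores = np.array([sum(rbf(w, a, gamma) for a in aspect_vectors)
                               for w in word_vectors])
            return scores / scores.sum()

        def assign_aspect(word_vectors, aspect_vectors, label_vectors, labels, gamma=0.03):
            # Sentence summary: attention-weighted mean of the word vectors.
            att = contrastive_attention(word_vectors, aspect_vectors, gamma)
            summary = att @ np.asarray(word_vectors)
            # Pick the aspect label whose embedding is closest (cosine) to the summary.
            sims = [summary @ l / (np.linalg.norm(summary) * np.linalg.norm(l))
                    for l in label_vectors]
            return labels[int(np.argmax(sims))]

    Because the RBF kernel decays quickly with distance, words far from every candidate aspect vector receive weights near zero, which is what makes the resulting attention distribution easy to inspect and interpret.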
    Original language: English
    Title of host publication: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
    Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
    Publisher: ACL
    Pages: 3182-3187
    Number of pages: 6
    Publication status: Published - 2020
    Event: 58th Annual Meeting of the Association for Computational Linguistics
    Duration: 5 Jul 2020 - 10 Jul 2020

    Conference

    Conference: 58th Annual Meeting of the Association for Computational Linguistics
    Period: 05/07/2020 - 10/07/2020
