Overcoming Language Variation in Sentiment Analysis with Social Attention

ACL 2017

Jan 21, 2021
Abstract: Variation in language is ubiquitous, particularly in newer forms of writing such as social media. Fortunately, variation is not random: it is often linked to social properties of the author. In this paper, we show how to exploit social networks to make sentiment analysis more robust to social language variation. The key idea is linguistic homophily: the tendency of socially linked individuals to use language in similar ways. We formalize this idea in a novel attention-based neural network architecture, in which attention is divided among several basis models, depending on the author's position in the social network. This has the effect of smoothing the classification function across the social network, and makes it possible to induce personalized classifiers even for authors for whom there is no labeled data or demographic metadata. The model significantly improves the accuracy of sentiment analysis on Twitter and on review data.

Authors: Yi Yang, Jacob Eisenstein (Georgia Institute of Technology)
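The core mechanism described in the abstract can be sketched in a few lines: each author is represented by an embedding derived from the social network (e.g., a node embedding), attention weights over K basis sentiment models are computed from that embedding, and the final prediction is the attention-weighted combination of the basis models' scores. This is a minimal, hedged illustration, not the authors' implementation; all dimensions, parameter names, and the use of plain linear scorers are assumptions for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3        # number of basis sentiment models (hypothetical choice)
D_TEXT = 5   # text feature dimension (assumed)
D_NODE = 4   # author/social-network embedding dimension (assumed)

# Hypothetical parameters: one linear sentiment scorer per basis model,
# plus a projection from author embeddings to attention logits.
W_basis = rng.normal(size=(K, D_TEXT))   # per-basis classifier weights
W_attn = rng.normal(size=(D_NODE, K))    # author embedding -> attention logits

def softmax(z):
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict(x_text, a_author):
    """Personalized sentiment score: attention over basis models,
    conditioned on the author's position in the social network."""
    attn = softmax(a_author @ W_attn)    # (K,) weights, sum to 1
    scores = W_basis @ x_text            # (K,) per-basis sentiment scores
    return float(attn @ scores)          # smoothed, personalized score

# Usage: socially close authors get similar embeddings, hence similar
# attention distributions, which smooths the classifier over the network.
x = rng.normal(size=D_TEXT)              # toy text features
a = rng.normal(size=D_NODE)              # toy author embedding
print(predict(x, a))
```

Because the attention weights depend only on the author embedding, an unlabeled author still receives a personalized mixture of classifiers as long as they appear in the social graph.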
