Explicit Semantic Analysis (ESA)


Unlike LSA (Latent Semantic Analysis), the paper below proposes ESA (Explicit Semantic Analysis) and describes how to use it for semantic relatedness and text categorization tasks. The basic idea of the paper is actually quite simple: using the content of Wikipedia, build a semantic representation for every word that appears in Wikipedia articles.

The semantic representation of each word is a high-dimensional vector, where each dimension corresponds to a concept in Wikipedia (i.e., one Wikipedia article). From these word-level representations, semantic representations of text fragments and whole documents can then be derived, as sketched below. As the authors describe, this kind of representation is very helpful for semantic processing of short texts.
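To make the idea concrete, here is a minimal sketch in Python over a tiny hand-made "Wikipedia" of four concept articles, using scikit-learn's TfidfVectorizer. The helper names (word_vector, text_vector) and the toy data are my own illustration, not the paper's code; the real method builds the term-concept matrix over a full Wikipedia dump, with additional pruning and an inverted index.

```python
# Minimal sketch of ESA-style interpretation vectors, assuming a toy
# "Wikipedia" of four concept articles; the real method uses every article
# in a full Wikipedia dump as one concept dimension.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy concept space: each string stands in for the text of one Wikipedia article.
concepts = ["Bank (finance)", "River", "Computer science", "Music"]
articles = [
    "bank money loan deposit interest account finance",
    "river water bank shore stream flood flow",
    "computer algorithm program data software code",
    "music song melody rhythm instrument concert",
]

# Build the term-concept matrix: rows are terms, columns are concepts,
# entries are the TF-IDF weight of the term in the concept's article.
vectorizer = TfidfVectorizer()
concept_term = vectorizer.fit_transform(articles)   # concepts x terms (sparse)
term_concept = concept_term.T.toarray()             # terms x concepts
vocab = vectorizer.vocabulary_                       # term -> row index

def word_vector(word):
    """ESA vector of a single word: its TF-IDF weight in every concept."""
    idx = vocab.get(word)
    return term_concept[idx] if idx is not None else np.zeros(len(concepts))

def text_vector(text):
    """ESA vector of a text fragment: sum of the vectors of its words."""
    return sum(word_vector(w) for w in text.lower().split())

print(word_vector("bank"))               # mass on both "Bank (finance)" and "River"
print(text_vector("bank loan interest")) # context shifts mass toward "Bank (finance)"
```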

Moreover, for polysemous words the representation itself offers a route to disambiguation: in context, the semantic vectors of the surrounding words reinforce the dimensions of the correct sense, so the ambiguous word is effectively disambiguated, as the continuation of the sketch below illustrates.
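Continuing the toy sketch above (same concept space and helpers), the combined context vector leans toward the concept of the intended sense, and cosine similarity between two ESA vectors can serve as a semantic relatedness score; again this is just an illustration, not the paper's pipeline.

```python
# Continuing the toy sketch above: the context vector of an ambiguous word
# leans toward the concept of the intended sense, and cosine similarity
# between two ESA vectors acts as a semantic relatedness score.
def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom else 0.0

financial = text_vector("bank loan interest deposit")
riverside = text_vector("bank river water shore")

# The dominant dimension of each context vector names the intended sense.
print(concepts[int(np.argmax(financial))])   # -> "Bank (finance)"
print(concepts[int(np.argmax(riverside))])   # -> "River"

# Relatedness between the two fragments (they share "bank" but little else).
print(cosine(financial, riverside))
```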


Wikipedia-based Semantic Interpretation for Natural Language Processing
http://www.aaai.org/Papers/JAIR/Vol34/JAIR-3413.pdf

Evgeniy Gabrilovich and Shaul Markovitch. 


Abstract
Adequate representation of natural language semantics requires access to vast amounts of common sense and domain-specific world knowledge. Prior work in the field was based on purely statistical techniques that did not make use of background knowledge, on limited lexicographic knowledge bases such as WordNet, or on huge manual efforts such as the CYC project. Here we propose a novel method, called Explicit Semantic Analysis (ESA), for fine-grained semantic interpretation of unrestricted natural language texts. Our method represents meaning in a high-dimensional space of concepts derived from Wikipedia, the largest encyclopedia in existence. We explicitly represent the meaning of any text in terms of Wikipedia-based concepts. We evaluate the effectiveness of our method on text categorization and on computing the degree of semantic relatedness between fragments of natural language text. Using ESA results in significant improvements over the previous state of the art in both tasks. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.

Original post: https://www.cnblogs.com/zsychanpin/p/7093176.html