
Neural Network Models for Word Sense Disambiguation: An Overview

By: Alexander Popov  
Open Access | Mar 2018

Abstract

The following article presents an overview of the use of artificial neural networks for the task of Word Sense Disambiguation (WSD). More specifically, it surveys recent advances in neural language models that have resulted in methods for the effective distributed representation of linguistic units. Such representations – word embeddings, context embeddings, sense embeddings – can be applied effectively to WSD, as they encode rich semantic information, especially in conjunction with recurrent neural networks, which are able to capture long-distance relations encoded in word order, syntax, and information structure.
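To make the idea concrete, the sketch below illustrates the general family of approaches the abstract refers to: given a context embedding (e.g., an averaged bag of word embeddings or a recurrent network's hidden state) and pre-trained sense embeddings, the predicted sense is the one most similar to the context. This is a minimal, hypothetical illustration with toy vectors, not code from the surveyed systems; the names and the 50-dimensional vectors are assumptions for demonstration only.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    # Cosine similarity between two dense vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def disambiguate(context_vec: np.ndarray, sense_vecs: dict) -> str:
    # Pick the candidate sense whose embedding is closest to the context embedding.
    return max(sense_vecs, key=lambda s: cosine(context_vec, sense_vecs[s]))

# Toy example: two hypothetical sense embeddings for "bank" and one context vector.
# In practice the context vector would come from word embeddings or an RNN encoder.
rng = np.random.default_rng(0)
context = rng.normal(size=50)
senses = {
    "bank%financial_institution": rng.normal(size=50),
    "bank%river_side": rng.normal(size=50),
}
print(disambiguate(context, senses))
```

The same similarity-based selection step can be plugged into richer pipelines, where the context vector is produced by a recurrent network that takes word order and longer-range dependencies into account.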

DOI: https://doi.org/10.2478/cait-2018-0012 | Journal eISSN: 1314-4081 | Journal ISSN: 1311-9702
Language: English
Page range: 139 - 151
Submitted on: Jul 1, 2017
Accepted on: Oct 23, 2017
Published on: Mar 30, 2018
Published by: Bulgarian Academy of Sciences, Institute of Information and Communication Technologies
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2018 Alexander Popov, published by Bulgarian Academy of Sciences, Institute of Information and Communication Technologies
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.