Lifelong Representation Learning for NLP Applications

by Hu Xu

Published by University of Illinois at Chicago.

2020  

Abstract

Representation learning lies at the heart of deep learning for natural language processing (NLP). Traditional representation learning (such as softmax-based classification, pre-trained word embeddings, language models, and graph representations) focuses on learning general or static representations in the hope of helping any end task. As the world keeps evolving, emerging knowledge (such as new tasks, domains, entities, or relations) typically comes with a small amount of data whose shifted distribution challenges the effectiveness of existing representations. As a result, learning effective representations for new knowledge becomes crucial. Lifelong learning is a machine learning paradigm that aims to build an AI agent that keeps learning from the evolving world, much as humans do. This dissertation focuses on improving representations for different types of new knowledge (classification, word-level, contextual-level, and knowledge-graph) across a myriad of NLP end tasks, from text classification, sentiment analysis, entity recognition, and question answering to more complex dialog systems. With lifelong representation learning, model performance on these tasks improves well beyond what existing general representation learning achieves.

Archived Files and Locations

application/pdf  3.7 MB
file_ytloibh56vcjblpslzvz3uat3a
s3-eu-west-1.amazonaws.com (publisher)
web.archive.org (webarchive)
Type: article-journal
Stage: published
Date: 2020-12-22
Catalog Record
Revision: d0537cbb-345c-44eb-9231-c925a5329fa0