Training Deep Neural Networks with Minimal Supervision: The case of Language Understanding and Generation
Presentation posted on 2018-03-29, 02:27, authored by Gholamreza Haffari
Language technology is being revolutionised by deep neural networks. In this talk, I cover some of our recent work on multilingual text understanding and generation. More specifically, I will revisit deep encoder-decoder architectures for translation, in which the encoder 'reads' text in a source language and the decoder 'generates' its translation in the target language. Although these deep neural architectures are powerful tools for modality transformation, they are notoriously data-hungry. I will describe our efforts to address the challenges of learning these architectures in data-scarce scenarios.
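The encoder-decoder pattern mentioned above can be sketched in a few lines. The following is a minimal, untrained toy in NumPy with a simple RNN cell; all sizes, parameter names, and the greedy decoding loop are illustrative assumptions, not the specific architectures discussed in the talk.

```python
# Minimal sketch of an encoder-decoder ("seq2seq") model for translation.
# Untrained, randomly initialised parameters -- for illustration only.
import numpy as np

rng = np.random.default_rng(0)

SRC_VOCAB, TGT_VOCAB, HID = 10, 8, 16  # assumed toy sizes
BOS, EOS = 0, 1                        # special begin/end tokens

E_src = rng.normal(0, 0.1, (SRC_VOCAB, HID))  # source embeddings
E_tgt = rng.normal(0, 0.1, (TGT_VOCAB, HID))  # target embeddings
W_enc = rng.normal(0, 0.1, (HID, HID))        # encoder recurrence
W_dec = rng.normal(0, 0.1, (HID, HID))        # decoder recurrence
W_out = rng.normal(0, 0.1, (HID, TGT_VOCAB))  # hidden -> target logits

def encode(src_ids):
    """The encoder 'reads' the source sentence into a state vector."""
    h = np.zeros(HID)
    for tok in src_ids:
        h = np.tanh(E_src[tok] + W_enc @ h)
    return h

def decode(h, max_len=10):
    """The decoder 'generates' target tokens greedily until EOS."""
    out, tok = [], BOS
    for _ in range(max_len):
        h = np.tanh(E_tgt[tok] + W_dec @ h)
        tok = int(np.argmax(h @ W_out))  # greedy choice of next token
        if tok == EOS:
            break
        out.append(tok)
    return out

translation = decode(encode([3, 4, 5]))
print(translation)  # a sequence of (untrained) target token ids
```

In practice the two components are trained jointly by maximising the likelihood of reference translations; the "data hungriness" the talk addresses refers to how much such parallel data these models typically need.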