Human language is inherently context-dependent, yet current natural language understanding and generation models still make many context-related errors. To address this gap, this research incorporates contextual information and studies a range of context-dependent tasks. We first investigate the role of context in subword segmentation for neural machine translation. We then explore context-dependent natural language understanding in dialogue systems. Finally, we define the format, wording, and genre of input sentences as meta context, and we use this meta context to synthesize unlabeled in-domain data that advances text classification under various settings.