Goal-driven text descriptions for images

Abstract

A big part of achieving Artificial General Intelligence (AGI) is building a machine that can see and listen like humans do. Much work has focused on designing models for image classification, video classification, object detection, pose estimation, speech recognition, and related tasks, and significant progress has been made in recent years thanks to deep learning. However, understanding the world is not enough: an AI agent also needs to know how to talk, and in particular how to communicate with humans. While perception (vision, for example) is common across animal species, the use of complex language is unique to humans and is one of the most important aspects of intelligence. In this thesis, we focus on generating textual output given visual input, which involves both visual perception and language generation. We rely on existing visual perception models and focus primarily on how to generate more meaningful text by studying the different goals of language.

Type: Thesis
Author: Ruotian Luo, Software Engineer at Waymo Perception