Seminars

Richard Socher (MetaMind) “Multimodal Question Answering for Language and Vision” @ Hackerman Hall B17
Feb 16 @ 12:00 pm – 1:15 pm

Abstract

Deep learning has enabled tremendous breakthroughs in visual understanding and speech recognition. Ostensibly, this is not the case in natural language processing (NLP) and higher-level reasoning.
However, it only appears that way because there are so many different tasks in NLP, and no single one of them, by itself, captures the complexity of language understanding. In this talk, I introduce dynamic memory networks, which are our attempt to solve a large variety of NLP and vision problems through the lens of question answering.

Biography

Richard Socher is the CEO and founder of MetaMind, a startup that seeks to improve artificial intelligence and make it widely accessible. He obtained his PhD from Stanford, working on deep learning with Chris Manning and Andrew Ng, and won the best Stanford CS PhD thesis award. He is interested in developing new AI models that perform well across multiple different tasks in natural language processing and computer vision.

He was awarded the Distinguished Application Paper Award at the International Conference on Machine Learning (ICML) 2011, the 2011 Yahoo! Key Scientific Challenges Award, a Microsoft Research PhD Fellowship in 2012, a 2013 “Magic Grant” from the Brown Institute for Media Innovation, and the 2014 GigaOM Structure Award.

Kyunghyun Cho (New York University) “Future (?) of Machine Translation” @ Hackerman Hall B17
Feb 23 @ 12:00 pm – 1:15 pm

Abstract

It is quite easy to believe that the recently proposed approach to machine translation, called neural machine translation, is simply yet another approach to statistical machine translation. This belief may drive research effort toward (incrementally) improving the existing neural machine translation system to outperform, or perform comparably to, the existing variants of phrase-based systems. In this talk, I aim to convince you otherwise. I argue that neural machine translation is not here to compete against the existing translation systems, but to open new opportunities in the field of machine translation. I will discuss three opportunities: (1) sub-word-level translation, (2) larger-context translation, and (3) multilingual translation.
Biography

Kyunghyun Cho is an assistant professor of Computer Science and Data Science at New York University (NYU). Previously, he was a postdoctoral researcher at the University of Montreal under the supervision of Prof. Yoshua Bengio, after obtaining a doctorate degree at Aalto University (Finland) in early 2014. Kyunghyun’s main research interests include neural networks, generative models, and their applications, especially to language understanding.

Center for Language and Speech Processing