School of Computer Science Colloquium - Understanding Deep Learning for Natural Language Processing

Omer Levy

October 22, 2017, 11:00
Schreiber Building, Room 006

Abstract:

Deep learning is revolutionizing natural language processing (NLP), with innovations such as word embeddings and long short-term memory (LSTM) networks playing a key role in virtually every state-of-the-art NLP system today. However, what these neural components learn in practice remains something of a mystery. This talk dives into the inner workings of word embeddings and LSTMs, in an attempt to gain a better mathematical and linguistic understanding of what they do, how they do it, and why they work.
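For reference, here are brief standard definitions of the two components named above; these are the textbook formulations, not material taken from the talk itself. A word embedding maps each word w to a dense vector \vec{w} \in \mathbb{R}^d; the speaker's earlier work (Levy and Goldberg, NIPS 2014) showed that the popular skip-gram model with k negative samples implicitly factorizes a shifted pointwise mutual information (PMI) matrix of word-context pairs:

$$ \vec{w} \cdot \vec{c} \;\approx\; \mathrm{PMI}(w, c) - \log k $$

The LSTM cell combines its input x_t with the previous state (h_{t-1}, c_{t-1}) through multiplicative gates:

$$
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$

where \sigma is the logistic sigmoid and \odot denotes elementwise multiplication.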


Bio:

I am a post-doc in the Department of Computer Science & Engineering at the University of Washington, working with Prof. Luke Zettlemoyer. Previously, I completed my PhD at Bar-Ilan University under the guidance of Prof. Ido Dagan and Dr. Yoav Goldberg. I am interested in designing algorithms that mimic the basic language abilities of humans, and in using them to build semantic applications such as question answering and summarization that help people cope with information overload. I am also interested in deepening our qualitative understanding of how machine learning is applied to language and why it succeeds (or fails), in the hope that a better understanding will foster better methods.
