Modern approach to NLP using Recurrent Neural Networks

Siddarth Venkatraman

ICT Seminar Hall 2

Oct 28, 2018

03:45 PM, 30 minutes

Talk description (Track Two)

The talk will start off with a brief overview of the main problem statements in the field of Natural Language Processing. It will then discuss how language can be represented as data usable by ML models: word embeddings, and how to build them or use pretrained embeddings in Python. Moving on to neural networks, the talk will briefly cover CNN models for NLP before diving into RNNs. LSTMs and related memory cells will be briefly discussed, along with their implementation in Keras. Finally, we will go over the state-of-the-art use of Reinforcement Learning for NLP, which allows us to optimize non-differentiable language metrics.
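To give a flavour of the word-embedding idea the talk covers, here is a minimal sketch using made-up toy vectors (real embeddings such as GloVe or word2vec have 50-300 dimensions and are learned from corpora; the vectors and words below are purely illustrative):

```python
from math import sqrt

# Toy 4-dimensional "embeddings" -- the values are invented for illustration.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.7, 0.2, 0.8],
    "apple": [0.1, 0.2, 0.9, 0.1],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Semantically related words should score higher than unrelated ones.
sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
```

With good embeddings, `sim_royal` comes out well above `sim_fruit`, which is what lets ML models treat similar words similarly.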

About the speaker

Siddarth Venkatraman is a second year CSE student. In 2017, he joined Project Manas (the autonomous car student project), and since then AI has been what he has devoted most of his time towards. In the past year and a half, Siddarth has worked on various ML projects, including road lane detection, object segmentation, image reconstruction and classification, sentiment analysis, as well as small models built for various Kaggle competitions. He has also ventured briefly into research on ensemble deep learning and RL safety, and is currently working with faculty at MIT to build an automated hypertensive retinopathy model. Siddarth dedicates his time to ML simply because the very nature of the field makes it intensely satisfying and exciting.