Interpretable Representation Learning for Speech Signals


AAII Technical Lecture Series (ATLS-7)

Abstract

Representation learning is a branch of machine learning comprising techniques that automatically discover meaningful representations from raw data for efficient information extraction. This talk will discuss speech representations: what they are, why they matter for deep learning, and how representations and neural models can be made interpretable while improving their performance.
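As a rough illustration of the kind of representation learning the abstract refers to, the sketch below is an assumption on our part, not the speaker's model: it replaces a hand-crafted front end with a learnable convolutional filterbank over raw waveform samples. PyTorch, the RawWaveformEncoder class, and all hyperparameters are illustrative choices.

# Minimal sketch (assumed PyTorch setup, not the speaker's method): a small
# 1-D convolutional encoder that maps raw speech samples to learned features.
import torch
import torch.nn as nn

class RawWaveformEncoder(nn.Module):
    """Maps a raw waveform to a sequence of learned feature vectors."""
    def __init__(self, n_filters: int = 64, kernel_size: int = 400, stride: int = 160):
        super().__init__()
        # A strided 1-D convolution acts as a learnable filterbank,
        # standing in for hand-crafted features such as mel spectrograms.
        self.filterbank = nn.Conv1d(1, n_filters, kernel_size, stride=stride)
        self.activation = nn.ReLU()

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, samples) -> features: (batch, n_filters, frames)
        return self.activation(self.filterbank(waveform))

# One second of 16 kHz audio (random here; a real recording in practice).
waveform = torch.randn(1, 1, 16000)
features = RawWaveformEncoder()(waveform)
print(features.shape)  # torch.Size([1, 64, 98])

In practice the encoder would be trained end to end on a downstream or self-supervised objective, and the learned filters can then be inspected for interpretability.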


Speaker

Dr. Purvi Agrawal is an Applied Researcher-II at Microsoft India with the speech team in the STCI AI and Cognition group. She completed her PhD on “Neural Representation Learning for Speech and Audio Signals” under Dr. Sriram Ganapathy at the Learning and Extraction of Acoustic Patterns (LEAP) lab, Department of Electrical Engineering, Indian Institute of Science (IISc), Bengaluru. She also worked at Sony R&D Labs, Tokyo, in 2017. Her research interests include interpretable deep learning, low-resource data modeling, raw waveform modeling, unsupervised/self-supervised learning, and biologically inspired deep learning.

Speaker: Purvi Agrawal (PhD IISc), Applied Researcher-II, Microsoft, India
Topic: Interpretable Representation Learning for Speech Signals
Date: 12 June 2021
Time: 5:00 PM (IST)