My name is Sreyan Ghosh. I am currently an M.S. C.S. student at the University of Maryland, College Park. Before this, I served as a Deep Learning Solutions Architect at Nvidia, Bangalore, where my primary work involved building and delivering deep learning-based NLP solutions to Nvidia's customers and partners. Before that, I served as a Software Engineer II at Cisco Systems, Bangalore, where I built network assurance systems for Cisco's Service Provider customers. Beyond software engineering, I am interested in speech and language processing. At UMCP, I work under the guidance of Prof. Dinesh Manocha at the Gamma Lab on audio-visual speech enhancement.
Previously, I was fortunate to work with Prof. S. Umesh at the Speech Lab @ Indian Institute of Technology Madras on making self-supervised learning in speech and audio more amenable to resource-constrained scenarios (both data and compute). I have also worked with Prof. Rajiv Ratn Shah at MIDAS Labs @ IIIT Delhi on content moderation, complex named entity recognition, and speech recognition systems for low-resource Indian languages and Indian-accented English.
I graduated with a Bachelor’s in Computer Science and Engineering from Christ University in 2020. During my undergraduate studies, I served as the Vice President and co-founder of Neuron, Christ University’s first AI group focused on research and hackathons. As an undergraduate, I won over 20 national and international hackathons.
I maintain a list of my publications and research implementations under the Research tab. I also blog about my personal experiences and about topics related to speech and text processing. I am always open to collaborations, so please feel free to drop me an email!
July 2022: 4 papers submitted to IEEE SLT 2022! Pre-prints now available under the Research section; code to be made available soon!
July 2022: Started contributing to GSoC 2022 for the Keras Organization. More details about my project can be found in the Projects section!
March 2022: 7 papers submitted to Interspeech 2022! Pre-prints now available under the Research section; code to be made available soon!
December 2021: Paper on Low-Resource Audio Representation Learning accepted to the AAAI 2022 SAS Workshop! Pre-print now available under the Research section!