MS in Data Science and AI - IIT Madras
Email • GitHub • LinkedIn • Twitter
I research how large language models think, with the aim of helping uncover the mechanisms behind In-Context Learning.
With a strong mathematical foundation, I focus on opening the "black box" of Transformers to understand their internal reasoning. Beyond theory, I love building robust, scalable AI systems that solve real-world problems.
Currently pursuing my MS in Data Science and AI at IIT Madras, working on Transformer Interpretability under Prof. Harish Guruprasad.
- Top 1.5% in India: Secured AIR-543 (98.62%ile) in GATE Data Science & AI (2024).
- Academic Excellence: Recipient of the MMVY Scholarship and selected for High Value Assistantship at IIT Delhi.
- Research Focus: Actively investigating the theoretical foundations of In-Context Learning.
Core
Python, PyTorch, TensorFlow, NumPy, Pandas
AI & Deep Learning
Transformers, CNNs, LSTMs, HuggingFace, spaCy, Scikit-Learn
Engineering & MLOps
FastAPI, Docker, Git/GitHub, Google Cloud Platform, VS Code
Multimodal AI Microservice:
Built a production-ready FastAPI system that integrates four distinct AI models: Named Entity Recognition (spaCy), Google Translation API, Speech Synthesis (gTTS), and Image Generation (Stability AI).
Key Tech: FastAPI, Cloud APIs, MLOps, Git Flow
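A service like this typically routes each request to one of several model backends. The sketch below is illustrative only (not the project's actual code): it shows a stdlib-only dispatcher pattern, with the real model calls (spaCy NER, translation, gTTS, image generation) stubbed out as placeholder functions. All names here are hypothetical.

```python
# Illustrative sketch of a multi-model dispatcher, with each AI backend
# stubbed out. In a real FastAPI app, dispatch() would sit behind an endpoint
# and each handler would call the corresponding model or cloud API.

def ner(text: str) -> dict:
    # Placeholder for spaCy entity extraction
    return {"task": "ner", "entities": []}

def translate(text: str) -> dict:
    # Placeholder for a translation API call
    return {"task": "translate", "text": text}

def speech(text: str) -> dict:
    # Placeholder for speech synthesis (e.g. gTTS)
    return {"task": "speech", "audio": b""}

def image(text: str) -> dict:
    # Placeholder for an image-generation API call
    return {"task": "image", "url": None}

HANDLERS = {"ner": ner, "translate": translate, "speech": speech, "image": image}

def dispatch(task: str, text: str) -> dict:
    # Route the request to the handler for the requested task,
    # rejecting unknown task names up front.
    if task not in HANDLERS:
        raise ValueError(f"unknown task: {task}")
    return HANDLERS[task](text)
```

Keeping the routing logic separate from the model calls like this makes it easy to swap a backend (say, a different translation provider) without touching the API surface.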
Currently diving deep into MLOps to bridge the gap between research models and production systems, while continuing my core research on interpretable AI.