News

Apr 25, 2026 Paper accepted at ACL 2026 Demo

Our paper “RECAP: An End-to-End Platform for Capturing, Replaying, and Analyzing AI-Assisted Programming Interactions” has been accepted at ACL 2026 Demo! RECAP captures AI chat sessions and fine-grained code edits inside VS Code, merges them into a unified replayable timeline, and provides analysis modules for studying developer-AI interaction patterns. Excited to present this work!

Mar 15, 2026 Paper accepted at ACL 2026

Our paper “Believing without Seeing: Quality Scores for Contextualizing Vision-Language Model Explanations” has been accepted at ACL 2026! We propose quality scoring functions for VLM-generated explanations that help users better assess model reliability without viewing the visual context. Excited to present this work in San Diego!

Jan 28, 2026 Joining Adobe as AI/ML Intern

Excited to share that I will be joining Adobe as an AI/ML Intern this summer in San Jose!

Aug 25, 2025 Started MIIS at Carnegie Mellon University

Excited to begin my Master of Science in Intelligent Information Systems at the Carnegie Mellon University School of Computer Science! Looking forward to diving deeper into NLP research and collaborating with brilliant minds in the field. Here’s to a new chapter of learning and growth! 🚀📚

May 15, 2025 ELI-Why accepted at ACL Findings 2025

Our paper “ELI-Why: Evaluating the Pedagogical Utility of LLM Explanations” has been accepted at ACL Findings 2025! In this work, we introduced ELI-Why, a benchmark to assess the pedagogical capabilities of LLMs, and found that inference-time instructions alone are insufficient for LLMs to produce high-utility explanations tailored to users’ informational needs.

Jun 21, 2024 Silver Medal in Kaggle Competition

Thrilled to share that our team won a silver medal (top 3.4% globally) in the LLM-Prompt-Recovery Challenge on Kaggle! 🥈 The task involved recovering original user prompts from Gemma-generated completions. I led model finetuning, focusing on a custom scoring strategy: a sharpened cosine similarity computed over sentence-t5-base embeddings. We used LoRA for parameter-efficient finetuning.
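The sharpened cosine similarity mentioned above can be sketched as follows. This is a minimal illustration, not the competition code: the toy NumPy vectors stand in for sentence-t5-base embeddings of the recovered and ground-truth prompts, and the exponent of 3 is an assumed sharpening factor.

```python
import numpy as np

def sharpened_cosine_similarity(a: np.ndarray, b: np.ndarray, exponent: int = 3) -> float:
    """Cosine similarity raised to a power: sharpening penalizes
    middling matches much more heavily than near-exact ones."""
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos ** exponent

# Toy 2-D vectors standing in for sentence-t5-base embeddings.
u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

print(sharpened_cosine_similarity(u, u))  # identical prompts score 1.0
print(sharpened_cosine_similarity(u, v))  # plain cosine ~0.707 drops to ~0.354
```

The effect of sharpening is that a recovered prompt must be very close to the original to score well; a "half-right" recovery loses most of its credit, which shaped how we optimized finetuning targets.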