
Email: tharis at bu.edu
I am a third-year PhD student at Boston University, jointly advised by **Krzysztof Onak** and **Venkatesh Saligrama**. I did my undergraduate studies at Dartmouth College, advised by Amit Chakrabarti, and spent two years in Seattle as a software engineer on the Microsoft Windows Base Kernel team.
In 2025, I completed an internship at the National Institute of Informatics in Tokyo, hosted by **Yuichi Yoshida**.
I am currently a student researcher at Google Research, kindly hosted by **Maryam Karimzadehgan**.
My research focuses on developing efficient and robust algorithms for Foundation Models. I am also interested in problems from other areas of theoretical computer science, including sublinear algorithms, learning theory, and complexity.
My CV
🎓 Google Scholar
Publications
Foundation Models
- Compression Barriers for Autoregressive Transformers: TH, Onak (COLT 2025)
- $k$NN Attention Demystified: A Theoretical Exploration for Scalable Transformers: TH (ICLR 2025)
Sublinear Algorithms
- Estimating Hitting Times Locally At Scale: TH, Spaeh, Dragazis, Tsourakakis (NeurIPS 2025)
- Efficient Algorithms for Adversarially Robust Approximate Nearest Neighbor Search: Andoni, TH, Kelman, Onak (NeurIPS 2025 Workshop: Reliable ML from Unreliable Data)
- Counting Simplices in Hypergraph Streams: Chakrabarti, TH (ESA 2022)
Others
- Teaching American Sign Language in Mixed Reality: Shao, Sniffen, Blanchet, Hillis, Shi, TH, Liu, Lamberton, Malzkuhn, Quandt, Mahoney, Kraemer, Zhou, Balcom (Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies)
Talks
- On Language Generation in the Limit [Oral Exam, BU]
- On $k$-NN Methods for Fast Attention Computation [Boston University AI Seminar]
- On Adversarially Robust Algorithms for Searching [Boston University Theory Seminar]
- On Space Lower Bounds for Transformers [National Institute of Informatics, Tokyo]
- On Space Lower Bounds for Transformers [COLT, Lyon]