Puja Trivedi
I am a CSE PhD candidate in the Graph Exploration and Mining at Scale (GEMS) Lab at the University of Michigan, where I am fortunate to be advised by Prof. Danai Koutra. I also often collaborate with Dr. Jay Thiagarajan at Lawrence Livermore National Laboratory.
I am broadly interested in understanding how self-supervised learning can be performed effectively and reliably for non-Euclidean and graph data by incorporating domain invariances and designing grounded algorithms. My recent work has focused on understanding the role of data augmentations in graph contrastive learning.
Email / CV / Google Scholar
News
[06/2024] Completed thesis proposal. Defense planned for Fall 2024! Currently seeking full-time opportunities.
[01/2024] Our work on graph neural network uncertainty estimation was accepted at ICLR!
[12/2023] Our work on calibrating GNNs when performing link prediction was accepted at ICASSP!
[10/2023] Started interning with Amazon Search in Palo Alto!
Large Language Model Guided Graph Clustering
Puja Trivedi,
Nurendra Choudhary,
Eddie Huang,
Vasileios Ioannidis,
Karthik Subbian,
Danai Koutra
Preprint, 2024
bibtex / Paper
We introduce GCLR, an active-learning framework for improving GNN-based graph clustering with LLM guidance.
Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks
Puja Trivedi,
Mark Heimann,
Rushil Anirudh,
Danai Koutra,
Jay J. Thiagarajan
International Conference on Learning Representations (ICLR), 2024
bibtex / arXiv / Code / Project Page
We introduce G-ΔUQ, an accurate and scalable strategy for obtaining reliable uncertainty estimates for node classification and graph classification tasks.
A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias
Puja Trivedi,
Danai Koutra,
Jay J. Thiagarajan
International Conference on Learning Representations (ICLR), 2023 (Spotlight)
bibtex / arXiv / Code
We study how adaptation protocols can induce safe and effective generalization on downstream tasks through the lens of feature distortion and simplicity bias.
On the Efficacy of Generalization Error Prediction Scoring Functions
Puja Trivedi,
Danai Koutra,
Jay J. Thiagarajan
International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023
bibtex / arXiv / Code
We rigorously study the effectiveness of popular generalization error prediction scoring functions under distribution shifts and corruptions.
Analyzing Data-Centric Properties for Contrastive Learning on Graphs
Puja Trivedi,
Ekdeep Singh Lubana,
Mark Heimann,
Danai Koutra, and
Jay J. Thiagarajan
Advances in Neural Information Processing Systems (NeurIPS), 2022
bibtex / arXiv / Code
We provide a novel generalization analysis for graph contrastive learning with popularly used, generic graph augmentations. Our analysis identifies several limitations in current self-supervised graph learning practices.
Augmentations in Graph Contrastive Learning: Current Methodological Flaws & Towards Better Practices
Puja Trivedi,
Ekdeep Singh Lubana,
Yujun Yan,
Yaoqing Yang, and
Danai Koutra
ACM The Web Conference (formerly WWW), 2022
bibtex / arXiv / Code
We contextualize the performance of several unsupervised graph representation learning methods with respect to the inductive biases of GNNs and show significant improvements by using structured, task-relevant augmentations.