Sayna Ebrahimi

I am a research scientist at Google Cloud AI Research. I completed my PhD at the loveliest of universities, UC Berkeley, in the Berkeley Artificial Intelligence Research (BAIR) lab, where I was advised by Trevor Darrell and David Steigmann. During my PhD, I spent time at Facebook AI Research and NVIDIA as a research intern. Outside work, you can find me lifting weights or playing with my ever-enthusiastic dog, Pashmak.

Email  /  Bio  /  Google Scholar  /  Github  /  Twitter  /  Facebook

profile photo
Research

My research focuses on tackling real-world, large-scale multimodal data distributions while maximizing adaptation and generalization. During my PhD, I focused on active learning, continual learning, and test-time adaptation.

News

  • I am serving as an Area Chair for ECCV 2024.
  • I served as a panelist at the virtual ContinualAI Unconference in Oct. 2023!
  • Our paper on selective prediction for LLMs is accepted at EMNLP 2023. Google blog post is here.
  • Our paper on test-time adaptation to address spurious correlations is accepted at NeurIPS 2023. Awesome collaboration with Qingyao Sun, Kevin Murphy, and Alexander D'Amour.
  • New paper introducing a new architecture and pretraining strategies for multimodal learning with unstructured (vision and language) and structured (tabular and time series) data!
Publications

    TextGenSHAP: Scalable Post-hoc Explanations in Text Generation with Long Documents
    James Enouen, Hootan Nakhost, Sayna Ebrahimi, Sercan O. Arik, Yan Liu, Tomas Pfister
    Preprint

    A post-hoc XAI method that extends Shapley-value-style explanations to the scale of LLMs and long documents.

    LANISTR: Multimodal Learning from Structured and Unstructured Data
    Sayna Ebrahimi, Sercan O. Arik, Yihe Dong, Tomas Pfister
    Preprint

    A novel architecture and pretraining strategies to learn from image, text, and time series/tabular data!

    ASPEST: Bridging the Gap Between Active Learning and Selective Prediction
    Jiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan O. Arik, Somesh Jha, Tomas Pfister
    Transactions on Machine Learning Research (TMLR, 2024)
    [Code]

    A novel method to combine active learning and selective prediction.

    Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs
    Jiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan O. Arik, Tomas Pfister, Somesh Jha
    Conference on Empirical Methods in Natural Language Processing (Findings of EMNLP 2023) [Google Blog]

    LLMs can adapt themselves for better selective prediction performance, i.e., to abstain from generating an answer when in doubt!

    Beyond Invariance: Test-Time Label-Shift Adaptation for Distributions with Spurious Correlations
    Qingyao Sun, Kevin Murphy, Sayna Ebrahimi, Alexander D'Amour
    Conference on Neural Information Processing Systems (NeurIPS 2023) [Code]

    A novel test-time (source data-free) adaptation mechanism for label shift in distributions with spurious correlations.

    Test-Time Adaptation for Visual Document Understanding
    Sayna Ebrahimi, Sercan O. Arik, Tomas Pfister
    Transactions on Machine Learning Research (TMLR, 2023)
    [Project page][Download benchmarks]

    A novel test-time (source data-free) adaptation mechanism for a multimodal task (document understanding).

    DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning
    Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister
    European Conference on Computer Vision (ECCV 2022)
    [Code]

    A rehearsal-free approach for using prompts in continual learning.

    Contrastive Test-Time Adaptation
    Dian Chen, Dequan Wang, Trevor Darrell, Sayna Ebrahimi
    Computer Vision and Pattern Recognition Conference (CVPR 2022)
    [Project page] [Code]

    Test-time adaptation using contrastive learning and self-training!

    Differentiable Gradient Sampling for Learning Implicit 3D Scene Reconstructions from a Single Image
    Shizhan Zhu, Sayna Ebrahimi, Angjoo Kanazawa, Trevor Darrell
    International Conference on Learning Representations (ICLR 2022)
    [Project page] [Code]

    Implicit 3D reconstruction using novel differentiable gradient sampling.

    Self-Supervised Pretraining Improves Self-Supervised Pretraining
    Colorado Reed*, Xiangyu Yue*, Ani Nrusimha, Sayna Ebrahimi, Vivek Vijaykumar, Richard Mao, Bo Li, Shanghang Zhang, Devin Guillory, Sean Metzger, Kurt Keutzer, Trevor Darrell
    Winter Conference on Applications of Computer Vision (WACV 2022)
    [Code]

    Hierarchical self-supervised pretraining improves pretraining itself!

    On-target Adaptation
    Dequan Wang, Shaoteng Liu, Sayna Ebrahimi, Evan Shelhamer, Trevor Darrell
    (arXiv 2021)

    Improving performance on unseen data using the InfoMax loss, self-training, and knowledge distillation across domains.

    Region-level Active Learning for Cluttered Scenes
    Michael Laielli, Giscard Biamby, Dian Chen, Adam Loeffler, Phat Dat Nguyen, Ross Luo, Trevor Darrell, Sayna Ebrahimi
    (arXiv 2021)

    A generalized active learning framework for object detection at the region level on real-world noisy, imbalanced, cluttered datasets.

    Predicting with Confidence on Unseen Distributions
    Devin Guillory, Vaishaal Shankar, Sayna Ebrahimi, Trevor Darrell, Ludwig Schmidt
    International Conference on Computer Vision (ICCV 2021)

    Predicting accuracy on unseen data distributions!

    Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting
    Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E. Gonzalez, Marcus Rohrbach, Trevor Darrell
    International Conference on Learning Representations (ICLR 2021)
    [Code]

    A hybrid continual learning algorithm that mitigates forgetting by using experience replay and explanation replay!

    Minimax Active Learning
    Sayna Ebrahimi, William Gan, Dian Chen, Giscard Biamby, Kamyar Salahi, Michael Laielli, Shizhan Zhu, Trevor Darrell
    (arXiv 2020)
    [Project Page]

    A semi-supervised active learning algorithm using minimax entropy that selects samples based on their diversity and uncertainty.

    Adversarial Continual Learning
    Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, Marcus Rohrbach
    European Conference on Computer Vision (ECCV 2020)
    [Paper] [Code] [Long Video] [Short Video] [Slides]

    A hybrid continual learning algorithm that mitigates forgetting by using architecture growth and memory replay!

    Uncertainty-Guided Continual Learning in Bayesian Neural Networks
    Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, Marcus Rohrbach
    International Conference on Learning Representations (ICLR 2020)
    [Paper] [Code] [Talk Video] [Project Page]

    A regularization-based continual learning algorithm that mitigates forgetting in Bayesian neural networks using uncertainty!

    Compositional GAN: Learning Conditional Image Composition
    Samaneh Azadi, Deepak Pathak, Sayna Ebrahimi, Trevor Darrell
    International Journal of Computer Vision (IJCV 2020)
    [Paper] [Code]

    Generating realistic images by composing pairs of objects from distinct distributions!

    Variational Adversarial Active Learning
    Sayna Ebrahimi*, Samarth Sinha*, Trevor Darrell
    *denotes equal contribution.
    International Conference on Computer Vision (ICCV 2019) (Oral)
    [Paper] [Code] [Talk Video] [Poster] [Project page]

    A novel task-agnostic active learning strategy that uses unsupervised learning (image reconstruction) to select samples.

    Generalized Zero and Few-Shot Learning via Aligned Variational Autoencoders
    Edgar Schonfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, Zeynep Akata
    Computer Vision and Pattern Recognition Conference (CVPR 2019)
    [Paper] [Code]

    A novel zero-/few-shot learning algorithm that uses different modalities.

    Gradient-free Policy Architecture Search and Adaptation
    Sayna Ebrahimi, Anna Rohrbach, Trevor Darrell
    Conference on Robot Learning (CoRL 2017) (Spotlight)
    [Paper] [Project page] [Bibtex]

    Neural architecture search using gradient-free optimization to adapt from supervised learning to reward-based learning.





    Theses


    Continual Learning with Neural Networks
    Sayna Ebrahimi; Spring 2020
    [Computer Science]

    Mechanical Behavior of Materials at Multiscale: Peridynamic Theory and Learning-based Approaches
    Sayna Ebrahimi; Spring 2020
    [Mechanical Engineering]

    (this guy makes a nice website)