Research
My research focuses on tackling real-world, large-scale, multi-modal data distributions while maximizing adaptation and generalization. I also develop label-efficient learning algorithms that reduce human annotation effort while facilitating the transfer of information through self-supervised and semi-supervised models.
Interested in interning at Google?
I am actively looking for motivated PhD student interns/researchers to do research in areas such as adaptation and factuality in multimodal and/or large language models. Feel free to contact me if you are interested (starting time is flexible).
DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning
Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister
European Conference on Computer Vision (ECCV 2022)
[Code]
A rehearsal-free approach that uses complementary prompts for continual learning.
Self-Supervised Pretraining Improves Self-Supervised Pretraining
Colorado Reed*, Xiangyu Yue*, Ani Nrusimha, Sayna Ebrahimi, Vivek Vijaykumar, Richard Mao, Bo Li, Shanghang Zhang, Devin Guillory, Sean Metzger, Kurt Keutzer, Trevor Darrell
Winter Conference on Applications of Computer Vision (WACV 2022)
[Code]
Hierarchical self-supervised pretraining improves pretraining itself!
Continual Learning with Neural Networks
Sayna Ebrahimi; Spring 2020
[Computer Science]
Mechanical Behavior of Materials at Multiscale: Peridynamic Theory and Learning-based Approaches
Sayna Ebrahimi; Spring 2020
[Mechanical Engineering]