Research
My research is on large-scale multi-modal generation and understanding. I have been a core contributor to Gemini 2.5 and to Veo 2.0 and 3.0. During my PhD, I focused on efficient learning and domain adaptation in computer vision.
News
Our paper on multilingual scaling laws has been accepted at ICLR 2026. Congratulations, Shayne!
I served as an Area Chair for CVPR 2026.
Veo 3.0 is out! I was a core model contributor.
Gemini 2.5 technical report is out! I was a core model contributor for its image generation capabilities.
Publications
ATLAS: Adaptive Transfer Scaling Laws for Multilingual Pretraining, Finetuning, and Decoding the Curse of Multilinguality
Shayne Longpre, Sneha Kudugunta, Niklas Muennighoff, I-Hung Hsu, Isaac Caswell, Alex Pentland, Sercan O. Arik, Chen-Yu Lee, Sayna Ebrahimi
International Conference on Learning Representations (ICLR 2026)
We introduce the Adaptive Transfer Scaling Law (ATLAS) for multilingual pretraining.
Unique Lives, Shared World: Learning from Single-Life Videos
Tengda Han*, Sayna Ebrahimi*, Dilara Gokay, Li Yang Ku, Maks Ovsjanikov, Iva Babukova, Daniel Zoran, Viorica Patraucean, Joao Carreira, Andrew Zisserman, Dima Damen
arXiv 2025
*denotes equal contribution.
We introduce the 'single-life' learning paradigm, where we train a distinct model exclusively on egocentric videos captured by one person.
Reverse Thinking Makes LLMs Stronger Reasoners
Justin Chih-Yao Chen, Zifeng Wang, Hamid Palangi, Rujun Han, Sayna Ebrahimi, Long T. Le, Vincent Perot, Swaroop Mishra, Mohit Bansal, Chen-Yu Lee, Tomas Pfister
North American Chapter of the Association for Computational Linguistics (NAACL 2025)
Reverse thinking improves LLM reasoning capabilities by enabling consistency checks between forward and backward thinking.
DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning
Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister
European Conference on Computer Vision (ECCV 2022)
[Code]
A rehearsal-free approach that uses complementary prompts for continual learning.
Self-Supervised Pretraining Improves Self-Supervised Pretraining
Colorado Reed*, Xiangyu Yue*, Ani Nrusimha, Sayna Ebrahimi, Vivek Vijaykumar, Richard Mao, Bo Li, Shanghang Zhang, Devin Guillory, Sean Metzger, Kurt Keutzer, Trevor Darrell
Winter Conference on Applications of Computer Vision (WACV 2022)
[Code]
Hierarchical self-supervised pretraining improves pretraining itself!
Theses
Continual Learning with Neural Networks
Sayna Ebrahimi; Spring 2020
[Computer Science]
Mechanical Behavior of Materials at Multiscale: Peridynamic Theory and Learning-based Approaches
Sayna Ebrahimi; Spring 2020
[Mechanical Engineering]