Qingqing Cao

Research Scientist • Apple SIML


I am a research scientist in the Apple System Intelligence and Machine Learning (SIML) group. My current focus is developing high-quality ML systems and applications optimized for scalability and efficiency. In the past, I built efficient, practical NLP systems for both edge devices and the cloud, including on-device (visual) question answering and faster Transformer models.

Previously, I was a postdoc in the UW NLP group at the University of Washington, where I twice received the postdoc research award. I hold a Ph.D. in computer science from Stony Brook University, where I was a recipient of the Catacosinos Fellowship, and I was named a Rising Star in Data Science by the University of Chicago.

News

May 28, 2025 Thanks to ICML 2025 for recognizing me as a top reviewer (top 1.9%)!
May 02, 2025 CtrlSynth was accepted to ICML 2025 as a poster!
May 01, 2025 Glad to serve as Senior Area Chair for EMNLP 2025!
Apr 11, 2025 Glad to serve as Area Chair for NeurIPS 2025!
Nov 02, 2024 Thanks to NeurIPS 2024 for recognizing me as a top reviewer (top 8.6%)!

Recent publications

2024

1. CtrlSynth: Controllable Image Text Synthesis for Data-Efficient Multimodal Learning
    Qingqing Cao, Mahyar Najibi, and Sachin Mehta
    Oct 2024
2. KV Prediction for Improved Time to First Token
    Maxwell Horton, Qingqing Cao, Chenfan Sun, Yanzi Jin, Sachin Mehta, Mohammad Rastegari, and Moin Nabi
    Oct 2024
3. Efficient Vision-Language Models by Summarizing Visual Tokens into Compact Registers
    Yuxin Wen, Qingqing Cao, Qichen Fu, Sachin Mehta, and Mahyar Najibi
    arXiv, Oct 2024
4. OpenELM: An Efficient Language Model Family with Open Training and Inference Framework
    Apr 2024
5. APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
    Bowen Zhao, Hannaneh Hajishirzi, and Qingqing Cao
    Jan 2024

2023

1. BTR: Binary Token Representations for Efficient Retrieval Augmented Language Models
    Qingqing Cao, Sewon Min, Yizhong Wang, and Hannaneh Hajishirzi
    Oct 2023