Hongbin Liu
{first_name}.{last_name}@duke.edu

Hi! I am a Ph.D. student at Duke University, where I am fortunate to be supervised by Prof. Neil Gong. My research focuses on the security and safety of generative AI, especially multimodal large language models. Previously, I earned a B.Eng. from the University of Science and Technology of China in 2020. I worked with Prof. Tong Zhang at HKUST in Summer 2019.

Preprints (*Equal contributions)

2024

  • Making LLMs Vulnerable to Prompt Injection via Poisoning Alignment
    Zedian Shao*, Hongbin Liu*, Jaden Mu, Neil Zhenqiang Gong.

  • Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models
    Zhongye Liu*, Hongbin Liu*, Yuepeng Hu, Zedian Shao, Neil Zhenqiang Gong.

  • Can DeepFake Speech be Reliably Detected?
    Hongbin Liu*, Youzheng Chen, Arun Narayanan, Athula Balachandran, Pedro J. Moreno, Lun Wang*.

  • Refusing Safe Prompts for Multi-modal Large Language Models
    Zedian Shao*, Hongbin Liu*, Yuepeng Hu, Neil Zhenqiang Gong.

  • Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning
    Yuqi Jia, Minghong Fang, Hongbin Liu, Jinghuai Zhang, Neil Zhenqiang Gong.

Publications (*Equal contributions)

2024

2023

2022

2021

Experience

May 2024 - Aug. 2024: Research Intern, Google AI.

June 2023 - Jan. 2024: Student Researcher, Google AI.

Jan. 2021 - Apr. 2021: Research Intern, Alibaba Group.

Jul. 2019 - Sep. 2019: Research Intern, HKUST.

Academic Activities

Conference Reviewer: ECCV 2024, ACCV 2024, ICCV 2023.

Journal Reviewer: T-IFS 2023.