Preprints (*Equal contributions)
2024
Making LLMs Vulnerable to Prompt Injection via Poisoning Alignment
Zedian Shao*, Hongbin Liu*, Jaden Mu, Neil Zhenqiang Gong.
Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models
Zhongye Liu*, Hongbin Liu*, Yuepeng Hu, Zedian Shao, Neil Zhenqiang Gong.
Can DeepFake Speech be Reliably Detected?
Hongbin Liu*, Youzheng Chen, Arun Narayanan, Athula Balachandran, Pedro J. Moreno, Lun Wang*.
Refusing Safe Prompts for Multi-modal Large Language Models
Zedian Shao*, Hongbin Liu*, Yuepeng Hu, Neil Zhenqiang Gong.
Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning
Yuqi Jia, Minghong Fang, Hongbin Liu, Jinghuai Zhang, Neil Zhenqiang Gong.
Publications (*Equal contributions)
2024
AudioMarkBench: Benchmarking Robustness of Audio Watermarking
Hongbin Liu*, Moyang Guo*, Zhengyuan Jiang, Lun Wang, Neil Zhenqiang Gong.
Datasets and Benchmarks Track @ Conference on Neural Information Processing Systems (NeurIPS).
Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu, Michael K Reiter, Neil Zhenqiang Gong.
USENIX Security Symposium.
Visual Hallucinations of Multi-modal Large Language Models
Wen Huang*, Hongbin Liu*, Minxin Guo, Neil Zhenqiang Gong.
Findings of the Association for Computational Linguistics (ACL).
Differentially Private Parameter-Efficient Fine-tuning for Large ASR Models
Hongbin Liu, Lun Wang, Om Thakkar, Abhradeep Guha Thakurta, Arun Narayanan.
DLSP @ IEEE Symposium on Security and Privacy.
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu*, Wenjie Qu*, Jinyuan Jia, Neil Zhenqiang Gong.
Security Architectures for Generative Artificial Intelligence (SAGAI) @ IEEE Symposium on Security and Privacy.
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
10 Security and Privacy Problems in Large Foundation Models
Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong.
Book Chapter @ AI Embedded Assurance for Cyber Systems.
2023
2022
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong.
USENIX Security Symposium.
Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning
Xinlei He, Hongbin Liu, Neil Zhenqiang Gong, Yang Zhang.
European Conference on Computer Vision (ECCV).
StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong.
ACM Conference on Computer and Communications Security (CCS).
Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong.
International Conference on Learning Representations (ICLR).
2021
Experiences
May 2024 - Aug. 2024: Research Intern, Google AI.
Jun. 2023 - Jan. 2024: Student Researcher, Google AI.
Jan. 2021 - Apr. 2021: Research Intern, Alibaba Group.
Jul. 2019 - Sep. 2019: Research Intern, HKUST.
Academic Activities
Conference Reviewer: ECCV 2024, ACCV 2024, ICCV 2023.
Journal Reviewer: T-IFS 2023.