Hi! I am Xuan Gong (Chinese name: 龚宣; I also go by Xander), a final-year undergraduate majoring in Computer Science at Tongji University. I am excited to start my Ph.D. journey at Shanghai Jiao Tong University in Fall 2025.
My research interests focus mainly on vision-language models (VLMs) and large language models (LLMs), especially model knowledge, memory, and factuality.
If you are interested in collaborating on research projects, or in offering internship or exchange opportunities, I would be thrilled to connect with you.
") does not match the recommended repository name for your site ("
").
", so that your site can be accessed directly at "http://
".
However, if the current repository name is intended, you can ignore this message by removing "{% include widgets/debug_repo_name.html %}
" in index.html
.
",
which does not match the baseurl
("
") configured in _config.yml
.
baseurl
in _config.yml
to "
".
Xuan Gong, Hanbo Huang, Shiyu Liang# (# corresponding author)
Preprint. Under review, 2025.
We revisit how supervised fine-tuning affects factual knowledge in LLMs, revealing a factuality gap between known and unknown knowledge. This gap can be mitigated at inference via in-context learning (ICL) or out-of-distribution prompts. Our theoretical and empirical results show that test-time prompts can overshadow fine-tuning data, suggesting ICL can compensate for poor fine-tuning and should be considered in evaluating fine-tuning strategies.
Xuan Gong, Tianshi Ming, Xinpeng Wang, Zhihua Wei# (# corresponding author)
EMNLP 2024 (Main Conference).
We propose DAMRO, a training-free method to reduce object hallucination in LVLMs by filtering misleading high-attention background tokens using the ViT CLS token. DAMRO significantly improves hallucination control on models like LLaVA and InstructBLIP across multiple benchmarks.