Xuan Gong

Hi! I am Xuan Gong (Chinese name: 龚宣; I also go by Xander), a final-year undergraduate majoring in Computer Science at Tongji University. I am excited to start my Ph.D. journey at Shanghai Jiao Tong University in Fall 2025.

My research interests focus on vision-language models (VLMs) and large language models (LLMs), especially model knowledge, memory, and factuality.

If you are interested in research collaboration, internship opportunities, or exchange programs, I would be thrilled to connect with you.


Education
  • Shanghai Jiao Tong University
    School of Computer Science
    Ph.D. Candidate
    Sep. 2025 - present
  • Tongji University
    B.S. in Computer Science (GPA ranked in the top 5%)
    Sep. 2021 - Jul. 2025
Honors & Awards
  • National Scholarship of China
    2022
  • Outstanding Graduate of Tongji University
    2025
  • First Prize, The 14th National University Student Mathematics Competition, Non-Mathematics Category (Shanghai Region)
    2022
  • 8th place (1st among undergraduate teams) on the B leaderboard of the 2023 CCF-BDCI competition track "Deployment of super resolution model based on TPU platform"
    2023
News
  • One paper accepted by EMNLP 2024 Main
    Sep 20, 2024
Selected Publications
From Parameters to Prompts: Understanding and Mitigating the Factuality Gap between Fine-Tuned LLMs

Xuan Gong, Hanbo Huang, Shiyu Liang# (# corresponding author)

Preprint, under review, 2025

We revisit how supervised fine-tuning affects factual knowledge in LLMs, revealing a factuality gap between known and unknown knowledge. This gap can be mitigated at inference via in-context learning (ICL) or out-of-distribution prompts. Our theoretical and empirical results show that test-time prompts can overshadow fine-tuning data, suggesting ICL can compensate for poor fine-tuning and should be considered in evaluating fine-tuning strategies.

DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination

Xuan Gong, Tianshi Ming, Xinpeng Wang, Zhihua Wei# (# corresponding author)

EMNLP 2024 Main

We propose DAMRO, a training-free method to reduce object hallucination in LVLMs by filtering misleading high-attention background tokens using the ViT CLS token. DAMRO significantly improves hallucination control on models like LLaVA and InstructBLIP across multiple benchmarks.
