A curated list of evaluation methods for assessing hallucination in language models
Recommended citation: S Qi. (2024). "Awesome-Hallu-Eval: A Comprehensive Collection of Hallucination Evaluation Methods." GitHub Repository.
View Repository
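As a taste of the simplest family in such a collection, the sketch below implements a source-grounding check that scores how many summary n-grams actually appear in the source document. The tokenizer, the bigram choice, and all names are illustrative assumptions rather than code from the repository.

```python
# Minimal source-grounding check: what fraction of summary n-grams
# occur in the source? Low scores hint at unsupported (hallucinated)
# content. Illustrative only; not code from the repository.
from typing import List, Set, Tuple

def tokens(text: str) -> List[str]:
    """Crude lowercase tokenizer; punctuation handling is an assumption."""
    return text.lower().replace(".", " ").replace(",", " ").split()

def ngrams(toks: List[str], n: int) -> Set[Tuple[str, ...]]:
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def grounding_score(source: str, summary: str, n: int = 2) -> float:
    """Fraction of summary n-grams also found in the source."""
    summ = ngrams(tokens(summary), n)
    if not summ:
        return 1.0
    return len(summ & ngrams(tokens(source), n)) / len(summ)

doc = "The committee approved the budget on Tuesday after a long debate."
print(grounding_score(doc, "The committee approved the budget on Tuesday."))  # 1.0
print(grounding_score(doc, "The president vetoed the budget on Friday."))     # ~0.33
```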
Data and code for evaluating LLMs' assessment of mixed-context hallucination through summarization
Recommended citation: S Qi, R Cao, Y He, Z Yuan. (2025). "Evaluating LLMs Assessment of Mixed-Context Hallucination Through the Lens of Summarization." arXiv preprint arXiv:2503.01670.
Download Paper
Published in Proceedings of the First Workshop on Scholarly Document Processing, 2020
Automatic scientific document summarization for the CL-SciSumm 2020 and LongSumm 2020 shared tasks.
Recommended citation: L Li, Y Xie, W Liu, Y Liu, Y Jiang, S Qi, X Li. (2020). "CIST@CL-SciSumm 2020, LongSumm 2020: Automatic scientific document summarization." Proceedings of the First Workshop on Scholarly Document Processing. 225-234.
Download Paper
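As a rough illustration of the extractive setting these shared tasks target, here is a frequency-based baseline that scores sentences by the average document frequency of their content words and keeps the top k. The stopword list and tokenization are assumptions, and this is a generic baseline, not the CIST system.

```python
# Generic frequency-based extractive baseline (not the CIST system):
# score each sentence by the mean document frequency of its content
# words and keep the top-k in document order.
from collections import Counter

STOP = {"the", "a", "an", "of", "in", "and", "to", "is", "are", "we", "for", "was"}

def extractive_summary(sentences, k=2):
    freq = Counter(w for s in sentences for w in s.lower().split() if w not in STOP)
    def score(s):
        words = [w for w in s.lower().split() if w not in STOP]
        return sum(freq[w] for w in words) / max(len(words), 1)
    top = set(sorted(sentences, key=score, reverse=True)[:k])
    return [s for s in sentences if s in top]  # preserve document order

doc = ["Scientific papers are long and highly structured.",
       "We summarize papers by extracting salient sentences.",
       "Extracting salient sentences preserves the original wording.",
       "The weather was nice during the conference."]
print(extractive_summary(doc))
```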
Published in arXiv preprint, 2021
Investigating subjective bias in abstractive summarization and its impact on summary quality.
Recommended citation: L Li, W Liu, M Litvak, N Vanetik, J Pei, Y Liu, S Qi. (2021). "Subjective bias in abstractive summarization." arXiv preprint arXiv:2106.10084.
Download Paper
Published in Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, 2022
Structure-aware extractive summarization for scientific papers using heterogeneous graph neural networks.
Recommended citation: S Qi, L Li, Y Li, J Jiang, D Hu, Y Li, Y Zhu, Y Zhou, M Litvak, N Vanetik. (2022). "SAPGraph: Structure-aware extractive summarization for scientific papers with heterogeneous graph." Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics.
Download Paper
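The sketch below shows the kind of heterogeneous document graph the paper is named for, with word, sentence, and section nodes and typed edges between them. A simple degree-based centrality stands in for the paper's graph neural network so the example stays self-contained; every name here is illustrative, not the authors' code.

```python
# Heterogeneous document graph with three node types: ('word', w),
# ('sent', i), ('sec', j). A degree-based score replaces the GNN
# purely for illustration.
from collections import defaultdict
from itertools import count

def build_hetero_graph(sections):
    """sections: list of sections, each a list of sentence strings."""
    adj = defaultdict(set)
    sid = count()
    sent_nodes = []
    for j, section in enumerate(sections):
        sec = ("sec", j)
        for sentence in section:
            s = ("sent", next(sid))
            sent_nodes.append((s, sentence))
            adj[sec].add(s); adj[s].add(sec)       # section <-> sentence
            for w in set(sentence.lower().split()):
                wn = ("word", w)
                adj[s].add(wn); adj[wn].add(s)     # sentence <-> word
    return sent_nodes, adj

def extract(sections, k=2):
    """Rank sentences by how connected their graph neighborhood is."""
    sent_nodes, adj = build_hetero_graph(sections)
    score = lambda node: sum(len(adj[nb]) for nb in adj[node])
    ranked = sorted(sent_nodes, key=lambda p: score(p[0]), reverse=True)
    return [text for _, text in ranked[:k]]

paper = [["We build a graph over words, sentences, and sections.",
          "The graph encodes document structure."],
         ["Structure-aware extraction selects salient sentences.",
          "The conference venue was pleasant."]]
print(extract(paper))
```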
Published in arXiv preprint, 2024
Comprehensive survey of automatic hallucination evaluation methods in natural language generation.
Recommended citation: S Qi, L Gui, Y He, Z Yuan. (2024). "A Survey of Automatic Hallucination Evaluation on Natural Language Generation." arXiv preprint arXiv:2404.12041.
Download Paper
Published in arXiv preprint, 2025
Information-theoretic analysis of L1-dependent biases in LLM simulation of L2-English dialogue.
Recommended citation: R Gao, X Wu, T Kuribayashi, M Ye, S Qi, C Roever, Y Liu, Z Yuan, JH Lau. (2025). "Can LLMs Simulate L2-English Dialogue? An Information-Theoretic Analysis of L1-Dependent Biases." arXiv preprint arXiv:2502.14507.
Download Paper
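As a toy version of the information-theoretic lens, the sketch below estimates smoothed unigram distributions from two small dialogue samples, one human and one simulated, and measures the KL divergence between them. The add-one smoothing and shared vocabulary are assumptions, not the paper's protocol.

```python
# Toy divergence measurement between two dialogue samples. Add-alpha
# smoothing over a shared vocabulary keeps KL finite; both choices
# are illustrative assumptions.
import math
from collections import Counter

def unigram_dist(texts, vocab, alpha=1.0):
    """Add-alpha smoothed unigram distribution over a fixed vocabulary."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts[w] for w in vocab) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def kl_divergence(p, q):
    """KL(p || q) in bits; p and q share a vocabulary."""
    return sum(p[w] * math.log2(p[w] / q[w]) for w in p)

human = ["i think the movie was quite good", "maybe we could go tomorrow"]
model = ["i think the movie was very good", "perhaps we can go tomorrow"]
vocab = {w for t in human + model for w in t.lower().split()}
p, q = unigram_dist(human, vocab), unigram_dist(model, vocab)
print(f"KL(human || model) = {kl_divergence(p, q):.4f} bits")
```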
Published in arXiv preprint, 2025
Improving LLMs' theory-of-mind reasoning capabilities with a neural knowledge base of entity states.
Recommended citation: H Xu, S Qi, J Li, Y Zhou, J Du, C Catmur, Y He. (2025). "EnigmaToM: Improve LLMs Theory-of-Mind Reasoning Capabilities with Neural Knowledge Base of Entity States." arXiv preprint arXiv:2503.03340.
Download Paper
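The sketch below captures the bookkeeping idea behind an entity-state knowledge base for theory-of-mind questions: record state changes together with who witnessed them, then answer belief queries from only the events a character saw. EnigmaToM maintains such states with a neural model; this exact-match store is an illustrative stand-in.

```python
# Entity-state store with witness tracking: the omniscient view and a
# character's belief can diverge, which is exactly what false-belief
# (Sally-Anne style) questions probe. Illustrative stand-in only.
class EntityStateKB:
    def __init__(self):
        self.events = []  # (entity, new_state, witnesses)

    def observe(self, entity, new_state, witnesses):
        self.events.append((entity, new_state, set(witnesses)))

    def true_state(self, entity):
        """Latest recorded state of the entity (omniscient view)."""
        for e, state, _ in reversed(self.events):
            if e == entity:
                return state
        return None

    def believed_state(self, character, entity):
        """Latest state among events the character actually witnessed."""
        for e, state, witnesses in reversed(self.events):
            if e == entity and character in witnesses:
                return state
        return None

kb = EntityStateKB()
kb.observe("marble", "in the basket", {"Sally", "Anne"})
kb.observe("marble", "in the box", {"Anne"})            # Sally has left
print(kb.true_state("marble"))                          # in the box
print(kb.believed_state("Sally", "marble"))             # in the basket
```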
Published in arXiv preprint, 2025
Evaluation of LLMs' ability to assess mixed-context hallucination in summarization tasks.
Recommended citation: S Qi, R Cao, Y He, Z Yuan. (2025). "Evaluating LLMs Assessment of Mixed-Context Hallucination Through the Lens of Summarization." arXiv preprint arXiv:2503.01670.
Download Paper
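One way to picture the evaluation setup: each item pairs a source with a claim labeled as grounded in the source, correct from world knowledge, or hallucinated, and a judge is scored on how often its label matches. The three-way label set, the item format, and the `toy_judge` below are assumptions for illustration, not the paper's benchmark.

```python
# Score a judge on mixed-context items: claims may be supported by the
# source, by world knowledge, or by nothing at all. Labels and the toy
# judge are illustrative assumptions.
from typing import Callable, List, Tuple

Item = Tuple[str, str, str]  # (source, claim, gold_label)

def judge_accuracy(items: List[Item],
                   judge: Callable[[str, str], str]) -> float:
    """Fraction of items where the judge's label matches gold."""
    return sum(judge(src, claim) == gold for src, claim, gold in items) / len(items)

items: List[Item] = [
    ("Paris hosted the 2024 Olympics.", "The 2024 Olympics were held in Paris.", "grounded"),
    ("Paris hosted the 2024 Olympics.", "Paris is the capital of France.", "world_knowledge"),
    ("Paris hosted the 2024 Olympics.", "London hosted the 2024 Olympics.", "hallucinated"),
]

def toy_judge(source: str, claim: str) -> str:
    """Stand-in for an LLM call; real judges see a prompt, not this rule."""
    last = claim.split()[-1].rstrip(".")
    return "grounded" if last in source else "hallucinated"

print(judge_accuracy(items, toy_judge))  # 1/3 on this toy set
```

A lexical rule like `toy_judge` conflates world-knowledge claims with hallucinations, which is precisely the failure mode mixed-context evaluation is designed to expose.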
Published in arXiv preprint, 2025
Incentive training for language models via verifier-free reinforcement learning.
Recommended citation: W Liu, S Qi, X Wang, C Qian, Y Du, Y He. (2025). "NOVER: Incentive Training for Language Models via Verifier-Free Reinforcement Learning." arXiv preprint arXiv:2505.16022.
Download Paper
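A rough sketch of the verifier-free incentive: rather than calling an external verifier, rate a sampled reasoning trace by how probable it makes the ground-truth answer under the policy model itself. The `answer_logprob` stub and the perplexity-to-reward mapping below are illustrative assumptions, not NOVER's exact formulation.

```python
# Verifier-free reward sketch: a reasoning trace earns more reward when
# the ground-truth answer becomes likelier given that trace. The stub
# below fakes the model; in practice this is one teacher-forced forward
# pass of the policy over the answer tokens.
import math

def answer_logprob(prompt: str, reasoning: str, answer: str) -> float:
    """Stub for: sum of answer-token log-probs given prompt + reasoning."""
    words = {w.strip(".,?") for w in reasoning.lower().split()}
    overlap = len(words & set(answer.lower().split()))
    return -2.0 + overlap  # toy: relevant reasoning raises the log-prob

def reward(prompt: str, reasoning: str, answer: str) -> float:
    """Map answer perplexity under the trace to a bounded reward."""
    n_tokens = max(len(answer.split()), 1)
    ppl = math.exp(-answer_logprob(prompt, reasoning, answer) / n_tokens)
    return 1.0 / (1.0 + ppl)

q, a = "What is the capital of France?", "paris"
for trace in ["France's capital is Paris, so the answer is Paris.",
              "I like turtles."]:
    print(round(reward(q, trace, a), 3))  # better trace, higher reward
```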