Open Source Projects

Here are some of my projects that may be useful to you:

Awesome-Hallu-Eval

A Comprehensive Collection of Hallucination Evaluation Methods

This is a curated list of evaluators designed to assess model hallucination, where you can easily find the right tools to evaluate and analyze hallucination behavior in language models.
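
As a hedged illustration of the kind of evaluator the list collects, the sketch below scores a claim against a source document with an off-the-shelf NLI model; the model name, label order, and scoring scheme are illustrative assumptions, not taken from any specific entry in the list.

```python
# Minimal sketch of NLI-based faithfulness scoring, one common family of
# hallucination evaluators. Model choice and label order are assumptions;
# verify them against the model card before relying on the scores.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL = "facebook/bart-large-mnli"  # assumed off-the-shelf NLI model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_score(source: str, claim: str) -> float:
    """Probability that `source` entails `claim`; a low score flags a
    potentially hallucinated claim."""
    inputs = tokenizer(source, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1)[0]
    # bart-large-mnli labels: 0 = contradiction, 1 = neutral, 2 = entailment
    return probs[2].item()

source = "The Eiffel Tower was completed in 1889 in Paris."
print(entailment_score(source, "The Eiffel Tower is in Paris."))   # high
print(entailment_score(source, "The Eiffel Tower is in London."))  # low
```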

FHSumBench

Evaluating LLMs’ Assessment of Mixed-Context Hallucination Through the Lens of Summarization

This project provides the data and code for our research on evaluating how large language models assess mixed-context hallucination through summarization tasks.
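
To make the task concrete, here is a minimal sketch of one way an LLM judge could be prompted to label a summary claim in a mixed-context setting, distinguishing claims supported by the source, claims absent from the source but true in the world, and false claims. This is an illustrative setup, not the benchmark's actual prompts or protocol; the model name and prompt wording are assumptions.

```python
# Hedged sketch of an LLM-as-judge for mixed-context hallucination.
# The prompt, label set, and model name are illustrative assumptions,
# not FHSumBench's actual evaluation protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = """Source document:
{source}

Summary claim:
{claim}

Classify the claim as exactly one of:
- factual: supported by the source
- factual hallucination: not in the source, but true in the real world
- non-factual hallucination: contradicted by the source or false

Answer with the label only."""

def judge_claim(source: str, claim: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(source=source, claim=claim)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```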

For more details about any specific project, feel free to contact me at siya.qi@kcl.ac.uk.