Advancement: Li, C. (CSE) - Control the Factuality for Large Language Models
Large language models (LLMs) have shown superior performance on several NLP tasks. However, they still suffer from generating factually incorrect content. We hypothesize this stems from their training process: the LLM training objective is to mimic human language without controlling the factuality of what is generated. To address this problem, this proposal introduces three methods: 1. Factual Aware Contextualized Training (FACT); 2. Retrieve Augmented In-context Learning (RAIL); 3. Retrieve Augmented In-context Verification (RAIV). Factual Aware Contextualized Training (FACT) adds factual context to the pre-training data so that supplying the same context during generation constrains the model to use factual data. Retrieve Augmented In-context Learning (RAIL) leverages the in-context learning ability of LLMs, including related facts retrieved from the training data in the input context during generation to constrain the generated content to be factual. Retrieve Augmented In-context Verification (RAIV) performs retrieval-based verification and modification after generation to ensure the factuality of the generated content. This proposal also contains several completed works related to language models, including future language modeling and task contamination in few-shot learning for LLMs.
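As a rough illustration of the RAIL idea described above, the sketch below retrieves the most related stored facts for a query and prepends them to the prompt as in-context evidence. The fact store, the word-overlap scoring, and the prompt template are all illustrative assumptions, not the proposal's actual implementation.

```python
# Hypothetical sketch of Retrieve Augmented In-context Learning (RAIL):
# retrieve facts related to the query and prepend them to the input
# context, constraining generation toward the retrieved facts.

def retrieve_facts(query, fact_store, k=2):
    """Rank stored facts by word overlap with the query; return top-k."""
    q_words = set(query.lower().split())
    scored = sorted(
        fact_store,
        key=lambda fact: len(q_words & set(fact.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rail_prompt(query, fact_store, k=2):
    """Prepend retrieved facts to the query as in-context evidence."""
    facts = retrieve_facts(query, fact_store, k)
    context = "\n".join(f"Fact: {f}" for f in facts)
    return f"{context}\nQuestion: {query}\nAnswer:"

# Toy fact store for demonstration only.
facts = [
    "The Eiffel Tower is located in Paris.",
    "Mount Everest is the tallest mountain on Earth.",
    "The Pacific Ocean is the largest ocean.",
]
print(build_rail_prompt("Where is the Eiffel Tower located?", facts, k=1))
```

In practice the retriever would be a learned dense or sparse retriever over the pre-training corpus rather than simple word overlap, and the prompt would feed an actual LLM; this sketch only shows how retrieved facts enter the input context.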
Event Host: Changmao Li, Ph.D. Student, Computer Science and Engineering
Advisor: Jeffrey Flanigan
Thursday, December 7, 2023 at 12:00pm
Jack Baskin Engineering, 330
Baskin Engineering, 1156 High Street, Santa Cruz, California 95064