Grounding LLMs to In-prompt Instructions: Reducing Hallucinations Caused by Static Pre-training Knowledge

Angus Addlesee*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)
27 Downloads (Pure)
