Abstract
Existing research has demonstrated that advanced Large Language Models (LLMs), such as GPT-4, Falcon-40B, and LLaMA2, can support Ontology Population (OP) by extracting and integrating assertional axioms from text. However, their tendency to hallucinate undermines trust in high-precision applications, causing ontology developers to hesitate before adopting LLM-based methods. In this study, we first surveyed existing OP experiments to select the most accurate model, identifying GPT as the leading candidate. We then analysed where and why hallucinations occurred during OP and observed that they predominantly arose when the model lacked clear guidance on the ontology's structure. To mitigate this, we devised a prompting strategy grounded in ontology design patterns, explicitly conveying schema constraints to the LLM. Experimental results on a real-world use case demonstrate that our pattern-based prompts significantly reduce hallucinations and yield more accurate axiom extraction compared to conventional prompts. These findings indicate that leveraging ontology design patterns in LLM prompts substantially enhances the reliability of automated OP workflows.
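The abstract describes a prompting strategy that states an ontology design pattern's schema constraints before asking the model for assertional axioms. The sketch below illustrates how such a prompt might be assembled; the pattern name, classes, properties, and few-shot example are all hypothetical placeholders, not taken from the paper.

```python
# Hypothetical sketch of a pattern-based few-shot prompt builder.
# The CropGrowthStage pattern and its vocabulary are illustrative
# assumptions, not the paper's actual ontology design pattern.

PATTERN = {
    "name": "CropGrowthStage",
    "classes": ["Crop", "GrowthStage"],
    "object_properties": [("hasGrowthStage", "Crop", "GrowthStage")],
}

# One illustrative few-shot example pairing source text with an axiom.
FEW_SHOT = [
    ("Winter wheat reached the tillering stage in March.",
     "hasGrowthStage(WinterWheat, Tillering)"),
]

def build_prompt(pattern, examples, text):
    """Assemble a few-shot prompt that states the pattern's schema
    constraints up front, then asks for assertional axioms only."""
    lines = [f"Ontology design pattern: {pattern['name']}"]
    lines.append("Allowed classes: " + ", ".join(pattern["classes"]))
    for prop, domain, rng in pattern["object_properties"]:
        lines.append(f"Property {prop}: domain {domain}, range {rng}")
    # Constraining output to the declared schema is the hallucination
    # mitigation the abstract refers to.
    lines.append("Only emit axioms that fit the schema above; "
                 "answer 'NONE' if the text supports no such axiom.")
    for source, axiom in examples:
        lines.append(f"Text: {source}\nAxioms: {axiom}")
    lines.append(f"Text: {text}\nAxioms:")
    return "\n".join(lines)

prompt = build_prompt(PATTERN, FEW_SHOT,
                      "The maize crop entered the flowering stage.")
print(prompt)
```

The resulting string would be sent to the LLM as-is; the key design choice is that the schema constraints precede the few-shot examples, so the model sees the allowed vocabulary before any extraction demonstration.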
| Original language | English |
|---|---|
| Pages (from-to) | 1438-1447 |
| Number of pages | 10 |
| Journal | Procedia Computer Science |
| Volume | 270 |
| Early online date | 6 Nov 2025 |
| DOIs | |
| Publication status | Published - 2025 |
| Event | 29th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems 2025, Osaka, Japan, 10 Sept 2025 → 12 Sept 2025 |
Keywords
- Agriculture
- Design Patterns
- Few-shot Prompting Strategy
- GPT
- Knowledge Base
- Ontology Population
ASJC Scopus subject areas
- General Computer Science