
Advancing Reliability of LLM-based Conversational Systems

dc.contributor.author: Taranukhin, Maksym
dc.contributor.copyright-release: Not Applicable
dc.contributor.degree: Doctor of Philosophy
dc.contributor.department: Faculty of Computer Science
dc.contributor.ethics-approval: Not Applicable
dc.contributor.external-examiner: Diana Inkpen
dc.contributor.manuscripts: Not Applicable
dc.contributor.thesis-reader: Vered Shwartz
dc.contributor.thesis-reader: Frank Rudzicz
dc.contributor.thesis-supervisor: Evangelos E. Milios
dc.date.accessioned: 2025-08-25T18:15:34Z
dc.date.available: 2025-08-25T18:15:34Z
dc.date.defence: 2025-07-18
dc.date.issued: 2025-08-22
dc.description.abstract: In recent years, Large Language Models (LLMs) have made a significant leap forward, enabling more fluent and human-like interactions. However, LLM-based conversational systems continue to struggle with critical challenges that undermine their reliability and effectiveness, particularly in high-stakes domains. This thesis addresses some of these limitations by enhancing LLM-based conversational systems in four key areas: contextual and emotional understanding, proactive dialogue behaviour, complex reasoning, and uncertainty estimation. The central research problem is navigating nuanced conversational dynamics while producing accurate responses in sensitive applications. The thesis therefore poses two primary questions: (1) How can LLMs be improved to better manage dialogue context, initiative, reasoning, and safety when uncertain? (2) What is the measurable impact of these enhancements on user interaction and system performance? A mixed-methods approach combines empirical evaluation with computational modeling. Novel frameworks are introduced for stance detection, user input understanding, uncertainty estimation, and knowledge-augmented reasoning. The proposed methods are implemented using a wide variety of techniques, such as in-context learning, evidential networks, and external knowledge integration. The results indicate significant gains in contextual awareness, user engagement, reasoning accuracy, and robustness in safety-critical scenarios. The integration of tailored prompting strategies and external knowledge sources markedly reduces LLM hallucinations and enhances system reliability. The contributions have potential practical significance in domains such as healthcare, law, and finance, where conversational AI must meet high standards of precision and trust. The proposed methods lay the groundwork for developing more reliable and context-aware conversational systems. Finally, the study acknowledges limitations in current evaluation frameworks and the rapid pace of LLM research. Future research should focus on multimodal contexts, lightweight model deployment, and further advances in interactive preference alignment for diverse real-world applications.
dc.identifier.uri: https://hdl.handle.net/10222/85391
dc.language.iso: en
dc.subject: Natural Language
dc.subject: Dialogue Systems
dc.title: Advancing Reliability of LLM-based Conversational Systems

Files

Original bundle

Name: MaksymTaranukhin2025.pdf
Size: 5.13 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.12 KB
Format: Item-specific license agreed upon to submission