Author: Taranukhin, Maksym
Dates: 2025-08-25; 2025-08-25; 2025-08-22
URI: https://hdl.handle.net/10222/85391

Abstract: In recent years, Large Language Models (LLMs) have made a significant leap forward, enabling more fluent and human-like interactions. However, LLM-based conversational systems continue to struggle with critical challenges that undermine their reliability and effectiveness, particularly in high-stakes domains. This thesis addresses these limitations by enhancing LLM-based conversational systems in four key areas: contextual and emotional understanding, proactive dialogue behaviour, complex reasoning, and uncertainty estimation. The central research problem is navigating nuanced conversational dynamics and producing accurate responses in sensitive applications. The thesis therefore poses two primary questions: (1) How can LLMs be improved to better manage dialogue context, initiative, reasoning, and safety when uncertain? (2) What is the measurable impact of these enhancements on user interaction and system performance? A mixed-methods approach combines empirical evaluation with computational modeling. Novel frameworks are introduced for stance detection, user input understanding, uncertainty estimation, and knowledge-augmented reasoning. The proposed methods are implemented using a variety of techniques, including in-context learning, evidential networks, and external knowledge integration. The results indicate significant gains in contextual awareness, user engagement, reasoning accuracy, and robustness in safety-critical scenarios. The integration of tailored prompting strategies and external knowledge sources markedly reduces LLM hallucinations and enhances system reliability. The contributions are of potential practical significance in domains such as healthcare, law, and finance, where conversational AI must meet high standards of precision and trust. The proposed methods lay the groundwork for developing more reliable and context-aware conversational systems. Finally, the study acknowledges limitations in current evaluation frameworks and the rapid pace of LLM research. Future work should focus on multimodal contexts, lightweight model deployment, and further advances in interactive preference alignment for diverse real-world applications.

Language: en
Subjects: Natural Language; Dialogue Systems
Title: Advancing Reliability of LLM-based Conversational Systems