Abstract:
This doctoral thesis is a work of research spanning a diverse set of fields, with symbolic logic serving as the focal point. We begin this introduction in reverse order so that the interdisciplinary nature of the thesis becomes clearer. The third and final part of the thesis deals with Explainable Artificial Intelligence (XAI), and specifically with symbolic artificial intelligence. We created an early form of an information system that can produce explanations within the framework of statistical hypothesis testing. Although our model applies primarily to classic artificial intelligence algorithms, we chose to conduct our proof of concept in the field of hypothesis testing, since hypothesis tests are the most commonly applied statistical methods in medical research. Such a system is therefore useful both for minimizing errors in the interpretation of statistical results and for improving the way those results are interpreted. The first part of the thesis deals with the theory behind this practice. We examine how the expressive power of existing logical systems can be extended by syntactically expanding each system and by creating new semantics through semantic topologies. Finally, the second part of the thesis deals with the concept of generic constructions, a tool of mathematical logic that provides both syntactic and semantic constructions.