In recent years, machine learning (ML) and artificial intelligence (AI) techniques have advanced rapidly, reshaping numerous aspects of human life. Despite these advances, which have quickly integrated complex ML models into everyday applications, such models are often regarded as “black boxes” that lack transparency in the outputs they produce. As a result, there is a growing demand for transparency and accountability in decision-making processes, since their absence can lead to critical issues of fairness and bias, as well as failures of regulatory compliance. This gives rise to the need for rapid progress in the research field of eXplainable AI (XAI). This thesis is motivated by the rapid development of the XAI field and by the shortcomings of prevailing model-agnostic approaches; it focuses on formal explainability, aiming both to address the challenges of model-agnostic methods and to advance formal explainability.