When users do not understand how machine learning models arrive at their outcomes, those models are opaque to them. Providing explanations for model outcomes is considered an important step in making such models transparent to users. In this thesis, we treat the explanation process as a two-step problem: the first step is to compute the feature attributions underlying a model outcome, and the second step is to combine these attributions with general principles of human interpretability to form a comprehensible explanation. This thesis first aims to improve the stability and algorithmic efficiency of feature attributions while maintaining faithfulness to the underlying machine learning models, and then draws upon research in philosophy, psychology, and cognitive science to generate human-interpretable explanations.