Interpretable machine learning models are crucial for ethical and transparent decision-making, and they require careful management of the trade-off between model complexity and accuracy. Additive models, a widely used class of interpretable models, are typically constructed with boosting algorithms that incrementally optimise an objective function. However, improper choices of objective function, together with the greedy nature of boosting, often lead to suboptimal models. This thesis first proposes a novel objective function that improves the complexity-accuracy trade-off of boosting. Second, it validates this improvement through case studies in nanomaterial synthesis. Finally, it proves that the worst-case approximation gap of boosting is at least 1/2, and investigates to what degree this gap can be closed with simulated annealing and mixed integer programming approaches.
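To make the boosting setting concrete, the sketch below shows one common instantiation of greedy additive-model construction: each round fits a regression stump to the residuals of a squared-error objective, so the model grows one interpretable term at a time. This is a minimal illustrative example under assumed choices (squared error, stump base learners, the hypothetical function `boost_additive_model`), not the algorithm or objective developed in the thesis.

```python
import numpy as np

def boost_additive_model(X, y, n_rounds=50, lr=0.1):
    """Greedy boosting sketch: each round adds the regression stump
    that best fits the current residuals of a squared-error objective."""
    n, d = X.shape
    pred = np.full(n, y.mean())            # start from a constant model
    stumps = []
    for _ in range(n_rounds):
        resid = y - pred                   # residuals of 0.5 * ||y - F(x)||^2
        best = None
        for j in range(d):                 # exhaustive stump search = greedy step
            for t in np.unique(X[:, j]):
                left = X[:, j] <= t
                if left.all() or (~left).all():
                    continue               # skip degenerate splits
                vl, vr = resid[left].mean(), resid[~left].mean()
                sse = ((resid[left] - vl) ** 2).sum() \
                    + ((resid[~left] - vr) ** 2).sum()
                if best is None or sse < best[0]:
                    best = (sse, j, t, vl, vr)
        _, j, t, vl, vr = best
        stumps.append((j, t, lr * vl, lr * vr))   # one additive term per round
        pred += np.where(X[:, j] <= t, lr * vl, lr * vr)
    return stumps

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))
y = np.sin(4 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
model = boost_additive_model(X, y)
```

Because each round keeps every previously added term fixed and only optimises the newest one, the procedure is greedy: an early stump that looks locally optimal can lock the model into a suboptimal additive decomposition, which is the behaviour behind the approximation-gap result stated above.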