XGBoost excels at machine learning on tabular data, but its tree-based models are hard to interpret. This talk introduces two leading approaches to interpretability: SHAP (SHapley Additive exPlanations), a post-hoc method that attributes a model's predictions to individual features, and EBM (Explainable Boosting Machine), a glass-box model that is interpretable by construction. We'll explore their theoretical foundations, compare their strengths and limitations, and demonstrate them using Python's shap and interpret (InterpretML) packages. Attendees will learn how these methods work and how to apply them to understand gradient boosting models.
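
To give a flavor of what the demo covers, here is a minimal sketch contrasting the two workflows on the same tabular task: a post-hoc SHAP explanation of an XGBoost model versus a glass-box EBM. The dataset and hyperparameters are illustrative choices, not taken from the talk; only the shap and interpret APIs shown are assumed to be available.

```python
# Minimal sketch: SHAP (post-hoc) vs. EBM (glass-box) on one tabular dataset.
# Assumes xgboost, shap, interpret, and scikit-learn are installed; the
# breast-cancer dataset is just a stand-in example.
import shap
import xgboost
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# 1) Post-hoc: train an XGBoost classifier, then explain it with SHAP.
model = xgboost.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)
explainer = shap.TreeExplainer(model)       # fast, exact attributions for tree models
shap_values = explainer.shap_values(X)      # per-sample, per-feature contributions
shap.summary_plot(shap_values, X)           # global view of feature impact

# 2) Glass-box: fit an EBM, which is interpretable by construction.
ebm = ExplainableBoostingClassifier().fit(X, y)
ebm_global = ebm.explain_global()           # per-feature shape functions to inspect
```

The key design difference this illustrates: SHAP explains an already-trained black-box model after the fact, while EBM builds interpretability into the model itself, so you trade some flexibility for explanations that are exact by design.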