I disagree with your suggestion to look only at complex and even black-box models in the hope of high accuracy, without any regard for model transparency and explainability. Doing so as a blanket policy is dangerous. Post-hoc interpretability tools such as feature importance, PDP/ALE, and LIME/SHAP offer only partial explanations: they cannot see the whole picture, and worse, they may be faithfully explaining a model that is seemingly predictive but actually wrong. This is what we call model risk.
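To make this concrete, here is a minimal Python sketch (the toy data, feature names, and model choice are all hypothetical illustrations of my own, not taken from your work): a model latches onto a spurious feature, and permutation importance, standing in for the post-hoc tools above, produces a tidy explanation of that wrong model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical toy setup: x_signal truly drives the label, while x_spurious
# leaks the label in the training data only.
rng = np.random.default_rng(0)
n = 2000
x_signal = rng.normal(size=n)
y = (x_signal + 0.3 * rng.normal(size=n) > 0).astype(int)
x_spurious = y + 0.1 * rng.normal(size=n)   # correlated with y in training only
X_train = np.column_stack([x_signal, x_spurious])

model = RandomForestClassifier(random_state=0).fit(X_train, y)

# At deployment the spurious correlation is gone, and accuracy drops sharply:
x_spur_deploy = rng.normal(size=n)          # no longer related to y
X_deploy = np.column_stack([x_signal, x_spur_deploy])
print("train accuracy:", model.score(X_train, y))
print("deploy accuracy:", model.score(X_deploy, y))

# The post-hoc explanation is tidy and faithful to the model: it reports that
# the model relies on the spurious feature.
imp = permutation_importance(model, X_train, y, n_repeats=10, random_state=0)
print("importances [signal, spurious]:", imp.importances_mean)
```

Note that the importance scores themselves look perfectly clean; spotting the problem still requires knowing that the second feature is spurious, which is exactly the part the post-hoc tool cannot supply.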
Another comment concerns the hoped-for high accuracy, which is simply not attainable when the noise in the data is high: the irreducible noise places a ceiling on the accuracy of any model, however complex.
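A companion sketch of this second point (again a hypothetical toy setup of my own): with labels flipped at rate p, even the Bayes-optimal rule is capped at accuracy 1 - p, so no model class can deliver the hoped-for accuracy.

```python
import numpy as np

# Hypothetical setup: the noiseless label is sign(x), but each label is
# flipped with probability p_flip.
rng = np.random.default_rng(0)
n, p_flip = 100_000, 0.2
x = rng.normal(size=n)
y_clean = (x > 0).astype(int)
flips = rng.random(n) < p_flip
y_noisy = np.where(flips, 1 - y_clean, y_clean)

# The Bayes-optimal prediction here is simply sign(x); its accuracy on the
# noisy labels is capped at 1 - p_flip, i.e. about 0.8:
y_pred = (x > 0).astype(int)
print("accuracy of the optimal rule:", (y_pred == y_noisy).mean())
```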