Worth Reading: Explainability is not a game

However, the operation of complex ML models is most often inscrutable, with the consequence that decisions taken by ML models cannot be fathomed by human decision makers. It is therefore important to devise automated approaches to explain the predictions made by complex ML models.