Novel Contract-based Runtime Explainability Framework for End-to-End Ensemble Machine Learning Serving
The growing complexity of end-to-end Machine Learning (ML) serving across the edge-cloud continuum has increased the need for runtime explainability to support service optimization, transparency, and trustworthiness. This raises many challenges in managing ML service quality and in engineering runtime explainability based on ML service contracts. Currently, consumers use ML services almost as a black box, with insufficient explainability not only for inference decisions but also for other contractual aspects, such as data/service quality and costs. Generic explainability for ML models is inadequate for explaining the runtime ML usage of individual consumers. Moreover, ML-specific metrics have not been addressed in existing service contracts. In this work, we introduce a novel contract-based runtime explainability framework for end-to-end ensemble ML serving. The framework provides a comprehensive engineering toolset, including explainability constraints in ML contracts, report schemas, and interactions between ML consumers and the components of ML serving for evaluating service quality with contract-based explanations. We develop new monitoring probes to measure ML-specific metrics on data quality, inference confidence, and inference accuracy, and to capture runtime ML usage. Finally, we present essential quality analyses via an observation agent, which interprets ML inferences and evaluates the contributions of ML inference microservices, assisting ML serving optimization. The agent also integrates ML algorithms for detecting relations among metrics, supporting constraint development. We demonstrate our work with two real-world applications for malware detection and object detection.
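As a rough illustration of the contract-based explanations described above, the following Python sketch shows how explainability constraints on ML-specific metrics (data quality, inference confidence, response time) might be expressed in a contract and checked against metrics reported by monitoring probes. All names, thresholds, and the contract structure here are hypothetical illustrations, not the paper's actual schemas.

```python
from dataclasses import dataclass

@dataclass
class ExplainabilityConstraint:
    """One contractual bound on an ML-specific runtime metric (hypothetical schema)."""
    metric: str        # e.g. "data_quality", "inference_confidence", "response_time_ms"
    operator: str      # ">=" for lower bounds, "<=" for upper bounds
    threshold: float

    def satisfied_by(self, value: float) -> bool:
        return value >= self.threshold if self.operator == ">=" else value <= self.threshold

# A hypothetical contract for one consumer of an ensemble ML service.
contract = [
    ExplainabilityConstraint("data_quality", ">=", 0.90),
    ExplainabilityConstraint("inference_confidence", ">=", 0.80),
    ExplainabilityConstraint("response_time_ms", "<=", 250.0),
]

# Metrics as a monitoring probe might report them for one request.
observed = {"data_quality": 0.95, "inference_confidence": 0.72, "response_time_ms": 180.0}

# A simple contract-based explanation report: which constraints hold and which are violated.
report = {
    c.metric: {"value": observed[c.metric], "satisfied": c.satisfied_by(observed[c.metric])}
    for c in contract
}
for metric, outcome in report.items():
    print(metric, outcome)
```

In this sketch, the unsatisfied inference-confidence constraint would surface in the consumer's report, indicating which contractual aspect of the service fell short for that particular request.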
History
Journal/Conference/Book title
CAIN 2024: IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI
Publication date
2024-06-11
Version
Published