How Can We Trust Black Box AI Models?

Presentation Synopsis: Explainability is often proposed as the principal means of achieving trust in AI models. However, there are many ways to determine whether we can trust AI, and explainability is not always the best one. This lecture will lay out a framework for understanding when explainability is needed and when it is not, as well as its appropriate uses, limitations, and alternatives.
Speaker Biography: Greg is Senior Director, AI Model Risk Management at Royal Bank of Canada, working on validation, governance, R&D, and IT for machine learning models. He is a leading voice in industry discussions around AI model fairness, explainability, and robustness. He recently moved to Halifax, where he is learning to surf in his spare time.