TrustML facilitates the development of trustworthy machine-learning-based systems, i.e., systems that are reliable, secure, explainable, and ethical.

The cluster examines trust-related requirements in several life-critical domains, including medicine and aerospace, and investigates solutions for building trustworthy systems that professionals and the general public can confidently adopt.

Why Research in Trustworthy Machine Learning?

Machine Learning (ML) is growing in importance to industry, government, and society. Yet the widespread adoption of ML-based systems in many life-critical domains, e.g., healthcare and aerospace, is impeded by a lack of trust in these systems.

One major way to overcome this barrier of trust is to give professionals, such as doctors and scientists, the means to understand the reasons behind ML predictions rather than expecting them to follow the predictions blindly. Further, ML-based systems must provide security and safety guarantees, without which they are unfit for practical use, e.g., in control systems that operate complex equipment, robots, and drones. The ability to preserve data privacy is another crucial issue in domains that deal with human-centric and IP-protected data, e.g., population management. Ethics and fairness concerns are critical in domains such as finance and law, to ensure that systems do not perpetuate inequality and discriminatory biases.

Approach

To address this challenge, the TrustML cluster brings together a diverse team of researchers from academia, industry, and government to (a) jointly investigate trust-centered requirements and pitfalls in a number of life-critical domains, including medicine, manufacturing, urban forestry, and control systems/aerospace, and (b) build techniques and processes for engineering trustworthy ML-based systems.

Through interdisciplinary collaborations and partnerships, we aim to lay the foundations for developing trustworthy ML-based systems, i.e., systems that are reliable, secure, privacy-preserving, explainable, and ethical. Such an effort will help prevent ML failures and biases, improving the trust of professionals and the general public in ML-based decision-making.
