Trustworthiness Evaluation for Machine Learning Models in AI-Driven Closed-Loop Labs

Author: Yao Fehlis - Head of AI, Artificial, Inc

Artificial’s mission is to unlock the full potential of lab automation to drive scientific discovery. Recognizing that traditional lab workflows are often complex and time-consuming, Artificial aims to reduce bottlenecks and enable scientists to focus on high-value tasks rather than manual processes. This is achieved by integrating various systems and automating repetitive tasks.

Closed-loop lab

As shown in the diagram, an AI-driven closed-loop laboratory workflow integrates machine learning (ML) models to optimize experimental processes. At Artificial, we focus on three areas in our closed-loop lab AI toolkit:

  • Existing ML Model Integration: We integrate existing ML models into automated scientific labs, where they generate insights and suggest new experimental parameters for optimization. We seamlessly integrate customers’ in-house models as well as open-source libraries.
  • Visualization: The results of the analysis are displayed using visualization tools to help users interpret the data and model outcomes.
  • Add-on Values: We bring additional value into customers’ pipelines.
    • Trustworthiness Evaluation: A system that evaluates the reliability, or trustworthiness, of AI-generated suggestions.
    • Better Parameter Suggestions: Once evaluation scores are available, we can suggest better parameters for more effective outcomes.

In summary, the diagram represents an automated, AI-driven lab in which experimental data is analyzed by machine learning and the results guide further experimentation in a feedback loop, with an added focus on improving parameter quality and assessing the reliability of outcomes.
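To make the loop concrete, here is a minimal sketch of what one iteration of such a feedback cycle could look like in code. It is illustrative only: the `run_experiment` stand-in, the candidate grid, and the Gaussian-process surrogate are assumptions made for this sketch, not Artificial’s actual integration API or any customer’s in-house model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_experiment(params: np.ndarray) -> float:
    """Stand-in for the automated lab running one experiment.
    Here it is a toy response surface; in practice this call would
    dispatch to the lab's instrument-control layer."""
    return float(-(params[0] - 0.7) ** 2 + np.random.normal(0.0, 0.01))

# Seed data from a few initial experiments: parameters X, measured outcomes y.
X = np.array([[0.1], [0.5], [0.9]])
y = np.array([run_experiment(x) for x in X])

model = GaussianProcessRegressor()
candidate_grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)

for _ in range(10):  # ten closed-loop iterations
    # Analyze the accumulated experimental data.
    model.fit(X, y)
    # The ML model suggests the next experimental parameters; here,
    # simply the candidate with the highest predicted outcome.
    next_params = candidate_grid[np.argmax(model.predict(candidate_grid))]
    # Run the experiment and feed the result back into the loop.
    X = np.vstack([X, next_params.reshape(1, -1)])
    y = np.append(y, run_experiment(next_params))
```

In a real deployment the surrogate and the suggestion rule would be whatever the customer already uses; the point is only the shape of the loop: fit, suggest, run, feed back.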

In collaboration with the University of Toronto, Artificial is working to bring Trustworthy AI to the next generation of labs through the creation of the TRustworthy AI Toolkit for Science (TRAITS). Trustworthy AI is essential for closed-loop labs because it centers the user experience and addresses questions such as “Is this experiment reasonable?” One of today’s challenges is that there is no existing quantitative evaluation of how trustworthy an AI/ML system truly is.

The TRustworthy AI Toolkit for Science project was created by Dr. Ashley Dale in the AUTOnomous Discovery of ALloys (AUTODIAL) group led by Prof. Jason Hattrick-Simpers at the University of Toronto. The project addresses scientists’ need for quantitative tools by creating an open-source software toolkit. TRAITS covers six key areas: transparency, robustness, abridgedness, interpretability, transferability, and stability. Used in a closed-loop lab, TRAITS has the potential to provide the user with a trustworthiness score for the model and a confidence score for each prediction. This would help a user identify cases where a model is confidently wrong and differentiate between low-risk, low-reward and high-risk, high-reward models when designing self-driving lab pipelines.
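As a rough illustration of how a per-prediction confidence score could be used to surface “confidently wrong” cases, the sketch below derives a confidence from a Gaussian process’s predictive uncertainty and checks it against held-out errors. The scoring rule and thresholds are hypothetical assumptions for this sketch, not the TRAITS API, which this post does not detail.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Train on a narrow parameter range, then test partly outside it, where a
# model can be confident yet wrong.
X_train = rng.uniform(0.0, 1.0, size=(30, 1))
y_train = np.sin(6 * X_train[:, 0]) + rng.normal(0.0, 0.05, 30)
X_test = rng.uniform(0.0, 1.5, size=(20, 1))
y_test = np.sin(6 * X_test[:, 0])

model = GaussianProcessRegressor().fit(X_train, y_train)
mean, std = model.predict(X_test, return_std=True)

# Hypothetical per-prediction confidence: high when the predictive
# standard deviation is low.
confidence = 1.0 / (1.0 + std)
error = np.abs(mean - y_test)

# Flag predictions the model is confident about yet gets badly wrong.
confidently_wrong = (confidence > 0.7) & (error > 0.5)
print(f"{int(confidently_wrong.sum())} of {len(X_test)} "
      "test predictions are confidently wrong")
```

A scientist reviewing the loop could then treat flagged suggestions as candidates for manual inspection rather than automatic execution.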

We look forward to our collaboration with Prof. Jason Hattrick-Simpers and Dr. Ashley Dale, Schmidt AI in Science Postdoctoral Fellow at the University of Toronto and the Acceleration Consortium. Together, we are bringing the future of technology to the next generation of labs.
