TruEra, provider of a suite of AI quality solutions, is releasing TruLens, an open source explainability tool for machine learning models based on neural networks.
TruLens is a library for deep neural networks that provides a uniform API for explaining TensorFlow, PyTorch, and Keras models. The software is freely available for download and comes with documentation and a developer community to further its development and use.
The library provides a coherent, consistent approach to explaining deep neural networks, drawing on published research.
It natively supports internal explanations that surface important concepts learned by network units, e.g., showing which visual concepts within images a facial recognition model uses to identify people, or which concepts a radiology diagnostic model uses to identify medical conditions.
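To make the idea of internal explanations concrete, the sketch below shows one way to estimate how strongly each unit in an internal layer drives a particular class score: average the gradient of that score with respect to the layer's activations over a batch of inputs, and rank the units by that average. This is a generic PyTorch illustration rather than TruLens's API; the model, layer, class index, and random inputs are placeholders chosen only for the example.

```python
# Minimal sketch of unit-level influence, independent of TruLens's API.
# Placeholders: an untrained ResNet-18, its last residual block, class 0,
# and random inputs standing in for a real evaluation set.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
layer = model.layer4          # internal layer whose units we inspect
class_idx = 0                 # quantity of interest: the score for this class

captured = {}

def hook(module, inputs, output):
    output.retain_grad()      # keep gradients on the internal activation
    captured["activation"] = output

handle = layer.register_forward_hook(hook)

x = torch.randn(8, 3, 224, 224)            # stand-in batch of inputs
score = model(x)[:, class_idx].sum()       # class score summed over the batch
score.backward()

# Influence of each channel: gradient averaged over batch and spatial positions.
influence = captured["activation"].grad.mean(dim=(0, 2, 3))
print("Most influential channels:", influence.argsort(descending=True)[:5].tolist())

handle.remove()
```

TruLens packages this kind of computation behind its uniform API, so the same analysis can be applied to TensorFlow, Keras, and PyTorch models without manual hooks.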
The library draws on a series of published academic papers. A key set of ideas stems from the paper "Influence-Directed Explanations for Deep Convolutional Networks," authored by the creators of the library at Carnegie Mellon University.
The library also supports other popular explainability techniques from the research community, including Saliency Maps, Integrated Gradients, and SmoothGrad, which are extensively used in computer vision and natural language processing.
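As an illustration of one of these techniques, the snippet below sketches Integrated Gradients in plain PyTorch: attributions are the input-to-baseline difference multiplied by the average gradient of the target class score along the straight-line path from the baseline to the input. The function name, the all-zeros baseline, and the untrained model in the usage example are illustrative choices, not part of TruLens.

```python
# Minimal sketch of Integrated Gradients, independent of TruLens's API.
# Assumes a single image-shaped input (C, H, W) and a classifier that maps a
# batch of such inputs to per-class scores.
import torch
import torchvision.models as models

def integrated_gradients(model, x, baseline, target_class, steps=50):
    # Points along the straight line from the baseline to the input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    path = baseline + alphas * (x - baseline)        # (steps, C, H, W)
    path.requires_grad_(True)

    # Gradient of the target class score at every point on the path.
    score = model(path)[:, target_class].sum()
    grads = torch.autograd.grad(score, path)[0]

    # Riemann approximation of the path integral of the gradients.
    return (x - baseline) * grads.mean(dim=0)

# Illustrative usage with a placeholder model and input.
model = models.resnet18(weights=None).eval()
x = torch.randn(3, 224, 224)
attributions = integrated_gradients(model, x, torch.zeros_like(x), target_class=0)
print(attributions.shape)  # same shape as the input: (3, 224, 224)
```

Saliency Maps and SmoothGrad follow the same gradient-based pattern: a saliency map uses a single gradient at the input, while SmoothGrad averages gradients over noisy copies of the input.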
TruLens has been used to explain deep learning models across a wide range of real-world applications. Use cases for neural network models include:
- Computer vision: identifying an individual person, animal, or object in a series of photos; categorizing types of damage for insurance claims or reviewing medical images
- Natural language processing: identifying malicious speech, social media post analytics, predictive text, or smart assistants
- Forecasting: using multiple inputs, including text and numerical inputs, to forecast future events, such as financial outcome probabilities
- Personalized recommendations: using past behavior to predict a user’s interest in other products
TruLens provides the ability to explain precisely how these models behave, which allows developers to better understand and refine their models during development, as well as to assess ongoing model performance and fix models once they are put to real-world use.
“TruLens reflects the over eight years of explainability research that this team has developed both at Carnegie Mellon University and at TruEra,” said Anupam Datta, co-founder, president, and chief scientist, TruEra. “This means that it starts as a robust, targeted solution with a strong lineage. There is also a team of deeply knowledgeable people standing by to help out developers as they explore the use of TruLens. We are looking forward to building an active developer community around TruLens.”
For more information about this news, visit https://truera.com.