AI Case Study
Scientists aim to eliminate animal testing by using machine learning to automate the process, drawing on existing knowledge about chemical interactions
Scientists at Johns Hopkins University, ToxTrack, UL Product Supply Chain Intelligence and the University of Konstanz
Industry
Public And Social Sector
Education And Academia
Project Overview
"An artificial intelligence system published in the research journal Toxicological Sciences shows that it might be possible to automate some tests using the knowledge about chemical interactions we already have. The AI was trained to predict how toxic tens of thousands of unknown chemicals could be, based on previous animal tests, and the algorithm’s results were shown to be as accurate as live animal tests.
The algorithm can predict results from nine different tests, from skin corrosion to eye irritation, which authors say comprised 57% of all animal testing done in the EU in 2011.
The team, led by Johns Hopkins toxicologist Thomas Hartung, has been working on translating data from the EU into usable data for AI since 2014, Nature reports. The dataset used in this AI system contains more than 850,000 chemical attributes, such as the degrees to which chemicals are explosive or carcinogenic.
A task force of 16 US government agencies invited Hartung’s team and other AI researchers working on similar problems to demonstrate their systems’ effectiveness at the National Institutes of Health, according to Nature. The exercise resulted in a toxicity model for 40,000 chemicals that will be released to the public later this year."
Reported Results
"The algorithm’s results were shown to be as accurate as live animal tests."
Technology
"The novel models called RASARs (read-across structure activity relationship) use binary fingerprints and Jaccard distance to define chemical similarity. A large chemical similarity adjacency matrix is constructed from this similarity metric and is used to derive feature vectors for supervised learning. We show results on 9 health hazards from 2 kinds of RASARs—“Simple” and “Data Fusion”. The “Simple” RASAR seeks to duplicate the traditional read-across method, predicting hazard from chemical analogs with known hazard data. The “Data Fusion” RASAR extends this concept by creating large feature vectors from all available property data rather than only the modeled hazard. "Simple RASAR models tested in cross-validation achieve 70%–80% balanced accuracies with constraints on tested compounds. Cross validation of data fusion RASARs show balanced accuracies in the 80%–95% range across 9 health hazards with no constraints on tested compounds.
Read-Across Structure Activity Relationship: RASARs are constructed in an unsupervised learning step and a supervised learning step.
In the unsupervised learning step distances between all chemicals in the modeled database are built.
The supervised learning step applies a supervised learning model to the vectors generated by the unsupervised learning step. In this work we use logistic regression in the Simple RASAR and a random forest in the Data Fusion RASAR." (paper)
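The following is a hedged Python sketch of that two-step scheme. The fingerprints, labels, feature construction, and hyperparameters are illustrative assumptions rather than the paper's actual pipeline: an unsupervised step builds a Jaccard similarity matrix, and a supervised step fits logistic regression (Simple) and a random forest (standing in for Data Fusion) on similarity-derived features.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 200 chemicals with 64-bit fingerprints and binary hazard labels
fps = rng.integers(0, 2, size=(200, 64)).astype(bool)
labels = rng.integers(0, 2, size=200)

# Unsupervised step: pairwise Jaccard similarities between all chemicals
inter = (fps[:, None, :] & fps[None, :, :]).sum(axis=-1)
union = (fps[:, None, :] | fps[None, :, :]).sum(axis=-1)
sim = np.where(union > 0, inter / union, 1.0)

idx_train, idx_test = np.arange(150), np.arange(150, 200)

def simple_features(i):
    # Assumed "Simple"-style feature vector: similarity to the most similar
    # known-hazardous and known-non-hazardous training chemical (self excluded)
    others = idx_train[idx_train != i]
    sims, labs = sim[i, others], labels[others]
    return [sims[labs == 1].max(), sims[labs == 0].max()]

X_train = np.array([simple_features(i) for i in idx_train])
X_test = np.array([simple_features(i) for i in idx_test])

# Supervised step: logistic regression for the Simple RASAR
simple = LogisticRegression().fit(X_train, labels[idx_train])
print("Simple RASAR (toy) accuracy:", simple.score(X_test, labels[idx_test]))

# The Data Fusion RASAR pairs much wider, multi-property feature vectors
# with a random forest; the same two-column features stand in here
fusion = RandomForestClassifier(n_estimators=100, random_state=0)
fusion.fit(X_train, labels[idx_train])
print("Data Fusion RASAR (stand-in) accuracy:", fusion.score(X_test, labels[idx_test]))

On real data the Data Fusion features would be built from all available property endpoints for each chemical, not from a single hazard, which is what the paper credits for the jump from 70%–80% to 80%–95% balanced accuracy.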
Function
R And D
Core Research And Development
Background
"Scientists test new chemical compounds on animals because we still don’t completely understand the world around us. New compounds might interact with living cells in unexpected ways, causing unforeseen harm."
The team had in the past "created a chemical hazard database via natural language processing of dossiers submitted to the European Chemical Agency with approximately 10,000 chemicals. We identified repeat OECD guideline tests to establish reproducibility of acute oral and dermal toxicity, eye and skin irritation, mutagenicity and skin sensitization. Based on 350–700+ chemicals each, the probability that an OECD guideline animal test would output the same result in a repeat test was 78%–96% (sensitivity 50%–87%)." (paper) They are now working with an extended dataset.
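For context on these figures: balanced accuracy, the metric reported for the RASAR models above, is the mean of sensitivity and specificity, which keeps scores honest on prevalence-skewed hazard data. A minimal illustration with made-up confusion-matrix counts:

def balanced_accuracy(tp, fn, tn, fp):
    # Mean of sensitivity (true-positive rate) and specificity (true-negative rate)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

# Example: 87% sensitivity and 85% specificity give 86% balanced accuracy
print(balanced_accuracy(tp=87, fn=13, tn=85, fp=15))  # 0.86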
Benefits
Data
"More than 850,000 chemical attributes, such as the degrees to which they are explosive or carcinogenic."