Posted on Lab Informatics. 25 June, 2019
Drug discovery is a challenging endeavor. The first step – finding compounds with the desired medicinal effect on the biological target – has traditionally involved the automated high-throughput screening (HTS) of large compound libraries to identify “hits” with biological activity. Once identified, promising compounds are put through a process called lead generation to evaluate criteria such as their dose-response curve, cellular efficacy, affinity for the target, reactivity with other compounds, and cytotoxicity.
Experimental techniques like HTS and parallel synthesis have dramatically increased the amount of available compound activity and assay data, making the effective and efficient mining of this data a central problem in drug discovery. For some time now, pharmaceutical companies have been investing in in silico methods in hopes of dramatically cutting the cost and time of drug discovery. In the last decade or so, the industry has been adopting various machine learning techniques to extract useful information from this data.
One approach which has shown much promise in this regard is known as “deep learning”. Evolved from research on artificial neural networks applied to image recognition, deep learning (DL) algorithms have demonstrated significant success over the last decade in a wide range of applications – computer games, computer vision, speech recognition, natural language processing, self-driving cars, and more. Researchers in the pharmaceutical industry are now applying this technology to diverse problems in drug discovery. In this blog, we provide a brief overview of DL methodology and discuss some of the current use cases and challenges for DL in drug discovery.
Machine learning uses computer programs (i.e., algorithms) to parse data, learn from it, and reach a conclusion or prediction about the data. Machine learning algorithms are “trained” initially by being fed large amounts of data, allowing the system to slowly improve and produce better results as it collects feedback on the accuracy of its responses. As the machine learns, it makes changes to how tasks are executed for better performance. This means that the result of a given input should change and improve as the machine gains experience.
If a machine learning algorithm executes its task in one step, transforming input data directly into a result, the learning machine is said to have a shallow architecture. If, on the other hand, the algorithm executes its task in a number of layers, where the output of the first layer is the input to the second, and so on, then the machine is said to have a deep architecture.
Deep learning is a type of machine learning that uses a deep architecture and has demonstrated success in finding obscure relationships in massive data sets. Deep learning algorithms use artificial neural networks (ANNs) to transform input data into accurate predictions, with each neuron applying a nonlinear function to a weighted sum of its inputs. Unlike a biological brain, where any neuron can connect to any other neuron within a certain physical distance and where the organization is highly dynamic, deep learning neural networks have discrete layers, connections, and directions of data propagation, and are generally more static. Deep learning algorithms feed the intermediate outputs of one layer of the ANN into the next layer in a weighted fashion, and the network learns by varying its weights to minimize the difference between its predictions and the desired values.
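The mechanics described above (weighted sums, nonlinear activations, and weight updates that shrink the prediction error) can be sketched in a few lines of plain Python. The toy below trains a tiny 2-input, 3-hidden-neuron, 1-output network on the XOR function by gradient descent; it illustrates the principle only, and all names and parameter values are our own choices, not a production recipe.

```python
import math
import random

def sigmoid(x):
    # the nonlinear function applied to each neuron's weighted sum
    return 1.0 / (1.0 + math.exp(-x))

# Toy data: XOR, a relationship no single-layer (shallow) model can fit.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

random.seed(0)
# Weights and biases for a 2-3-1 network.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0, 0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 0.0

def forward(x):
    # the outputs of the first layer become the inputs of the second
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w1, b1)]
    out = sigmoid(sum(w * h for w, h in zip(w2, hidden)) + b2)
    return hidden, out

def total_error():
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA)

initial = total_error()
lr = 1.0
for _ in range(4000):
    for x, y in DATA:
        hidden, out = forward(x)
        # backpropagation: nudge every weight to reduce (prediction - target)^2
        d_out = 2 * (out - y) * out * (1 - out)
        for j, h in enumerate(hidden):
            d_hid = d_out * w2[j] * h * (1 - h)
            w2[j] -= lr * d_out * h
            b1[j] -= lr * d_hid
            for i, xi in enumerate(x):
                w1[j][i] -= lr * d_hid * xi
        b2 -= lr * d_out

final = total_error()
print(f"squared error: {initial:.3f} -> {final:.3f}")
```

The “learning” here is nothing more than the repeated weight adjustments in the inner loop; with each pass over the data, the squared error between predictions and targets shrinks.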
As training deep networks tends to consume significant computational resources and data, DL has only developed into a leading artificial intelligence technology in recent years, with the emergence of big data and powerful CPU and GPU hardware. A turning point in the application of DL was the ImageNet image recognition contest in 2012: that year, a team from the University of Toronto in Canada entered a deep learning algorithm called SuperVision that soundly beat the competition, dramatically dropping the image recognition error rate from 25.8% the previous year to 16.4% (with further optimization, by 2015 deep networks had pushed the error rate below the approximately 5% achieved by humans). This was the shot heard round the artificial intelligence world, sparking a gold rush to apply DL to problems in many different areas.
Researchers are now exploring DL approaches to enhance drug discovery in several different areas. A few examples include:
Predicting Chemical Reactions. Deep learning algorithms have demonstrated considerable success in predicting chemical reactions between candidate compounds and target molecules, which can be useful, for example, in predicting the toxicity of a drug. These models allow biomedical engineers to quickly evaluate the design of new synthetic compounds by querying a trained deep neural network that predicts how candidate compounds will interact with target molecules.
In one recent example, Blaschke et al. utilized a DL algorithm to successfully generate novel structures with predicted activity against dopamine receptor type 2. Most sizeable biopharma companies now have ongoing collaborations or internal programs utilizing this predictive capability of DL to find new disease treatments.
Predicting Docking Properties. Another important aspect of de novo drug design is finding compounds that can bind effectively to the target molecule to deliver their pharmacologic effects. DL algorithms can be used to screen a collection of virtual compounds to determine which can potentially be good binders.
In another recent example, a deep learning approach improved docking-based virtual screening by using contextual data such as distances, atom types, atomic partial charges, and amino acids to evaluate docking potential for 40 different receptors. This was accomplished without feature engineering, which is a costly and time-consuming domain-specific task.
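At its core, a DL-based virtual screen is a scoring loop over a compound library: featurize each candidate, score it with a trained model, and keep the top-ranked binders. The sketch below shows only that shape; the featurizer and scoring model are trivial stand-ins (random descriptors and a fixed weighted sum rather than pose-derived features and a trained network), and every identifier is invented for illustration.

```python
import random

random.seed(2)

def featurize(compound_id):
    # stand-in for pose-derived descriptors (distances, atom types,
    # partial charges, etc. in a real screen)
    return [random.random() for _ in range(4)]

def score_binding(features):
    # stand-in for a trained deep network: a fixed weighted sum
    weights = [0.4, 0.1, 0.3, 0.2]
    return sum(w * f for w, f in zip(weights, features))

# A virtual library of 1,000 hypothetical compound identifiers.
library = [f"CMPD-{i:04d}" for i in range(1000)]

# Score every candidate and keep the ten best-ranked binders.
scored = [(score_binding(featurize(c)), c) for c in library]
top_hits = sorted(scored, reverse=True)[:10]
for score, cid in top_hits:
    print(f"{cid}: {score:.3f}")
```

The value of the DL approach lies entirely inside `score_binding`: a model that has learned from real binding data can rank millions of virtual compounds far faster than physical assays could test them.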
Image Recognition. Image recognition by machines trained via deep learning has progressed to the point where it outperforms humans in some scenarios. Image recognition DL algorithms can be used to identify indicators for cancer in blood or tumors in MRI scans, for example.
The Google Brain team recently demonstrated the potential of this technology when they announced a computer vision system for the identification of protein crystallization that performed at an accuracy of 94 percent. Protein crystallization is a key step in determining the three-dimensional structure of proteins (which is integral to determining their function) and can thus play a role in the discovery of drugs to treat various illnesses.
Another example of the successful application of DL in image recognition comes from Silicon Valley-based Baidu Research, which recently announced the development of an algorithm that can accurately identify metastasized breast cancer.
Lead Optimization. DL algorithms can also assist in lead optimization, the process that follows lead generation, in which drug candidates are further evaluated in a variety of ways. One important technique in lead optimization is the use of Quantitative Structure-Activity Relationships (QSAR), which model how changes in chemical structure affect biological activity. Here, DL algorithms are used to propose modifications of chemical structures that improve the efficacy and safety of a drug candidate.
One example of this approach in action comes from researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Electrical Engineering and Computer Science (EECS). These scientists have developed a model that better selects lead molecule candidates based on desired properties. In addition, the model modifies the molecular structure needed to achieve a higher potency, while ensuring that the molecule is still chemically valid.
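As a toy illustration of the structure-activity idea, the sketch below fits the simplest possible activity model (a single logistic unit) to fabricated binary fingerprints. The fingerprint length, the activity rule, and the data are all invented for illustration; a real QSAR pipeline would derive fingerprints from actual chemical structures with a cheminformatics toolkit, take labels from assay data, and typically use a much deeper network.

```python
import math
import random

random.seed(1)

# Fabricated data: each "compound" is an 8-bit structural fingerprint,
# and activity here is driven by the co-occurrence of features 0 and 3.
def make_compound():
    fp = [random.randint(0, 1) for _ in range(8)]
    active = 1 if (fp[0] and fp[3]) else 0
    return fp, active

train = [make_compound() for _ in range(200)]

# A single logistic unit: the simplest structure-activity model.
w = [0.0] * 8
b = 0.0

def predict(fp):
    z = sum(wi * xi for wi, xi in zip(w, fp)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Gradient descent on the log-loss.
lr = 0.3
for _ in range(300):
    for fp, y in train:
        g = predict(fp) - y  # gradient of the log-loss w.r.t. z
        for i, xi in enumerate(fp):
            w[i] -= lr * g * xi
        b -= lr * g

correct = sum((predict(fp) > 0.5) == bool(y) for fp, y in train)
print(f"training accuracy: {correct}/{len(train)}")
```

After training, the learned weights themselves carry structure-activity information: the features with the largest weights are the structural fragments the model associates with activity, which is exactly the kind of signal chemists use to decide which modifications to try next.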
While DL holds much promise to accelerate drug discovery, there are still several hurdles to broad use of the technology in this area. Experimental results are often difficult and expensive to obtain in drug discovery, which means that data is often limited for the most interesting assays, complicating the training of accurate deep learning models. In addition, biological data is often noisy and uncertain, which can translate into significant uncertainty in a model's predictions. Finally, DL predictions are essentially “black boxes”, with little traceability or supporting information.
These concerns highlight the fact that a deep learning algorithm is only as good as the data it is fed. Clean, robust, and large datasets are necessary for AI to generate effective conclusions, and companies considering AI solutions must first ensure that their overall systems architecture can provide the quality data that is needed. Nonetheless, the increased application of deep learning in the drug discovery arena is forecast to significantly improve the effectiveness of the researchers working to develop medicines that ultimately improve the lives of patients.
Copyright (C) 2020