Embracing the “Trial and Error” approach in drug discovery
by Marija (Masha) Zecevic
Nassim Taleb is an essayist, scholar, and statistician, probably best known for his book on randomness and risk, “The Black Swan”. Although his theories grew out of his background in financial markets and trading, they can also be applied to other fields of human knowledge, including medical science and drug discovery.
In a recent lecture at Stanford University, Taleb elaborated on his belief that the role of knowledge is highly overstated and that most of the technical and scientific progress humanity has achieved is the product of a trial and error approach.
“The biggest improvements in drug discovery is when people did not know what they were doing, but knew that. Today we are directing. We have genomics, proteomics and so on but have not been able to discover as much as in the past by doing purely trial and error,” said Taleb during his lecture.
Taleb illustrated “trial and error” in science with the example of cooking. One does not need to take a chemistry class to make a tasty meal: you try and taste, and eventually you get something good, like sourdough or yoghurt. It is only retrospectively that we figure out which process is actually responsible for the final result.
My question is the following: have we actually gone back to the “trial and error” approach in drug discovery today?
Some platform-technology biotech start-ups seem to be practicing such a “trial and error” approach by combining medicinal chemistry, biology and bioinformatics capabilities in their toolkits. Specifically, medicinal chemists generate libraries of compounds, biologists measure the effect in an ideally predictive in vitro system, and bioinformatics provides the tools to understand which cellular pathways are behind the observed effect (Figure 1).
Figure 1: The “trial and error” drug discovery process. Lead compounds are selected from a chemical library (1) mainly on the basis of their ability to elicit a desired biological effect (3), for example chloride conductance of the CFTR channel or the ability to kill a HER2-expressing tumor cell line. The same cells used in the assay (2) are also analyzed with powerful new technologies that can measure the impact of drugs on the genome, transcriptome, proteome and metabolome (4). This approach does indeed sound like what Taleb describes: “trial and error”, with what is actually happening on a biological level figured out only afterwards.
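The screen-first, explain-later order described in the caption can be sketched as a toy simulation. This is purely illustrative: the compound library, the assay readout, and the “pathway signature” are all hypothetical stand-ins, not a real screening protocol.

```python
import random

random.seed(42)  # reproducible toy run

def run_assay(compound):
    """Simulated biological readout for one compound.

    In this toy model, compounds whose id is divisible by 7 secretly
    hit the target pathway and give a strong effect; everything else
    gives background noise. The screener does not know this rule.
    """
    return 1.0 if compound % 7 == 0 else random.uniform(0.0, 0.3)

def screen(library, threshold=0.8):
    """Trial and error: test every compound, keep the strong responders.

    No mechanistic knowledge is used here, only the observed effect.
    """
    return [c for c in library if run_assay(c) >= threshold]

def retrospective_omics(hits):
    """Only after hits are found do we ask what they share.

    Stand-in for transcriptome/proteome analysis: here the hidden
    'pathway' is just the arithmetic property the simulation baked in.
    """
    return {"shared_signature": all(c % 7 == 0 for c in hits)}

library = list(range(1, 101))   # 100 hypothetical compounds
hits = screen(library)          # selected purely on observed effect
analysis = retrospective_omics(hits)  # mechanism recovered afterwards
```

The point of the sketch is the ordering: `screen` never consults the mechanism, and `retrospective_omics` is only run on the winners, mirroring the cook-first, explain-later loop Taleb describes.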
Despite high expectations, powerful bioinformatics tools have disappointed many in their ability to speed up the drug discovery process and increase the number of new drugs approved. The most widely accepted explanation is our inability to interpret the vast amounts of data generated, because we simply do not understand the underlying biology. The so-called “omics” tools have mainly been used to discover new biological targets by looking at correlations with specific pathological states, rather than at causality. These new targets have too often proven challenging precisely because the biology involved was not understood. In a way, the “omics” tools have almost arrived ahead of their time.
Could the new approach, in which “omics” is studied last and only once a drug-effect relationship has been observed, be truly transformational and finally speed up the drug discovery process? Have we found a way to accelerate the “trial and error” mechanism? Perhaps “omics” tools allow us to embrace our lack of knowledge by providing the retrospective analysis in almost real time. Or, to use Taleb’s analogy, we can keep cooking the meal until we get it right, and once we do, bioinformatics will tell us almost immediately why we were successful.
Omics: The English-language neologism “omics” informally refers to a field of study in biology whose name ends in -omics, such as genomics, proteomics or metabolomics. The related suffix -ome is used to refer to the objects of study of such fields, such as the genome, proteome or metabolome, respectively.