inside view

On balance

One of the big challenges in drug discovery is that many factors must come together in the same compound for it to be a successful, safe and efficacious drug. It is well understood that the appropriate physical and chemical properties must be considered, such as ADME properties, metabolic stability and an absence of toxicity – at least at therapeutic concentrations. The issue is that these requirements compete with one another and as such very rarely fall into place. Increasing the potency, for example, may require increasing the lipophilicity, which unfortunately has knock-on effects in terms of the ADME properties and toxicity. As a result, not only are predictive modelling techniques being used very early in the drug discovery process, but a wide range of experimental end-points are now being measured, such as early in-vitro toxicity. This poses a new challenge: data overload. Given the amount of data and the range of properties that need to be balanced, the question becomes how to use that information to quickly target the high-quality chemistry that will deliver a successful drug. How do we optimise all these different parameters and factors simultaneously?

A number of approaches have been applied to answer these questions. Many, described as multi-parameter optimisation (MPO), are not unique to drug discovery. This provides the opportunity to use techniques that have been tried and tested within other disciplines – aerospace engineering, for example. Within drug discovery, the application of rules of thumb like Lipinski's rule of five means that chemists are working within an established space and therefore not introducing significant additional risk. The problem is that these criteria are often treated as filters – should a property fall outside the established parameters, the compound is simply discounted. The danger of drawing such a hard distinction between a compound with a logP of 5.01 and one with a logP of 4.99, for example, is exacerbated by significant uncertainty in the data being used. Using 10 filters, each with an accuracy of 90 per cent, the probability of a perfect compound passing them all is only around 35 per cent.
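The arithmetic behind this is simple to sketch. Assuming ten independent filters, each correctly classifying a genuinely good compound 90 per cent of the time, an ideal compound survives them all only about a third of the time:

```python
# Sketch: why stacking hard filters discards good compounds.
# Assumption: 10 independent pass/fail filters, each of which
# correctly classifies a truly ideal compound 90% of the time.
n_filters = 10
accuracy = 0.9

# Probability that a truly ideal compound survives every filter.
p_survive = accuracy ** n_filters
print(f"P(ideal compound passes all {n_filters} filters) = {p_survive:.1%}")
# ~35% – roughly two in three ideal compounds are wrongly rejected.
```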



The opportunity cost of throwing potentially good molecules away due to predictive or experimental error is potentially huge. These are simple approaches, however, and far more sophisticated methods have emerged, such as desirability functions, which take the different properties of interest and map them onto a degree of desirability. Those values are then combined into an overall score that assesses all of the factors appropriately to suggest, on balance, how well a compound fits the objective. This approach offers a lot of flexibility, as each criterion can be defined as a subtle trend rather than a harsh cut-off. The uncertainty in the underlying data also needs to be considered explicitly, so that a confidence limit surrounds the decisions about which compounds to take forward.

There are times, of course, when what constitutes an appropriate balance is unknown. A commonly used technique in this case is Pareto optimisation. Using this approach, the optimality of an outcome – a compound, in this case – is defined in terms of not just one specific profile, but the range of possible optimal outcomes that represent different balances. By sampling a spectrum of options, chemists can explore what the best balance should be.

Multi-parameter optimisation (MPO) in drug discovery is an area that's had incredible uptake recently, says Matt Segall, Optibrium's CEO. He explains why.

All of these approaches are now being applied in drug discovery and are making a significant difference to the success of projects. The cost of taking an initial idea through to a drug on the market is enormous, and most of that cost is due to the fact that the vast majority of compounds that are synthesised and tested will never lead to a successful drug – typically, only one in 12 compounds going into development makes it to market. The primary issue used to be a lack of appropriate ADME and pharmacokinetic properties, but that seems to have been reduced by filtering those poor compounds out earlier in the process. The issue now is toxicity – the specific problem has shifted while the overall success rate remains the same. Being able to view all these factors simultaneously and very early on, however, means that a chemistry or set of compounds that could bring all those factors together can be targeted. There are numerous examples of this.
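The two ideas above – desirability functions and Pareto optimisation – can be illustrated with a minimal sketch. The property names, desirability shapes and example compounds here are invented for illustration, not taken from any real project:

```python
# Minimal sketch of desirability scoring and Pareto ranking.
# All properties, thresholds and compounds below are hypothetical.

def desirability_logp(logp):
    """Soft preference: fully desirable below logP 4, fading to 0 by 6."""
    if logp <= 4.0:
        return 1.0
    return max(0.0, 1.0 - (logp - 4.0) / 2.0)

def desirability_potency(pic50):
    """Soft preference: fully desirable above pIC50 7, rising from 5."""
    if pic50 >= 7.0:
        return 1.0
    return max(0.0, (pic50 - 5.0) / 2.0)

def score(compound):
    # Combine per-property desirabilities into one overall score
    # (a simple product; a weighted geometric mean is another common choice).
    return desirability_logp(compound["logp"]) * desirability_potency(compound["pic50"])

def pareto_front(compounds):
    """Compounds not dominated on (lower logP, higher pIC50) by any other."""
    front = []
    for a in compounds:
        dominated = any(
            b["logp"] <= a["logp"] and b["pic50"] >= a["pic50"]
            and (b["logp"] < a["logp"] or b["pic50"] > a["pic50"])
            for b in compounds
        )
        if not dominated:
            front.append(a)
    return front

compounds = [
    {"name": "A", "logp": 3.5, "pic50": 7.2},
    {"name": "B", "logp": 5.0, "pic50": 8.0},
    {"name": "C", "logp": 5.5, "pic50": 6.0},
]
for c in sorted(compounds, key=score, reverse=True):
    print(c["name"], round(score(c), 2))
print("Pareto front:", [c["name"] for c in pareto_front(compounds)])
# A and B both sit on the Pareto front (different balances of the two
# properties); C is dominated by A on both criteria.
```

Note how the smooth desirability functions avoid the logP 4.99-versus-5.01 cliff that a hard filter creates.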

One project we worked on involved a retrospective analysis of a drug discovery project that had been running for more than five years at a large pharmaceutical company. More than 3,000 compounds had been synthesised and tested, and we were given the resulting data in a blind test. What was really interesting was that in the first half of the project the company had focused a lot of effort on one particular area of chemistry but had not found the balance of properties needed to achieve in-vivo efficacy. Examining these compounds through probabilistic scoring, we saw that this area of chemistry was very high risk and that an alternative area was more likely to be potent. The company had eventually come to the same conclusion, but we were able to show that – by using predictive techniques to survey the entire area of chemistry to find the right set of compounds, choosing a subset on which to gather experimental data and then focusing narrowly on the ones worth progressing to in-vivo efficacy studies – that area of chemistry could have been reached with only 10 per cent of the effort.

Chemists tend to focus on potent areas of chemistry and then go back and determine what other properties should be present and whether any problems need to be fixed. The issue here is that it ties them into very specific areas of chemistry, making it difficult to deal with any problems that may be encountered without long iterations. It is important to consider, as early as possible, where the chemistry is most likely to yield potent compounds, combined with all the other factors of concern. When dealing with large numbers of compounds, it is still too expensive to measure all these properties, which is where predictive techniques come in. While no approach is perfect – these techniques do carry uncertainties and statistical error – having the balance of probabilities on where the most likely area of chemistry lies for a successful candidate is invaluable.
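The "balance of probabilities" idea can be made concrete with a small sketch: given a predicted value and an estimate of the model's error, the probability that a compound truly meets a criterion follows from the normal distribution. The predicted values and error below are invented for illustration:

```python
import math

def prob_below(predicted, threshold, sigma):
    """P(true value < threshold), assuming a normally distributed
    prediction error with standard deviation sigma."""
    z = (threshold - predicted) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical example: predicted logP of 4.5 with an error of +/- 0.5 log units.
p = prob_below(predicted=4.5, threshold=5.0, sigma=0.5)
print(f"P(logP < 5) = {p:.2f}")
# ~0.84: a hard filter at logP 5 would keep this compound, but a
# prediction of 5.2 (P ~ 0.34) would be rejected outright despite a
# one-in-three chance of actually being compliant.
```

Scoring compounds by such probabilities, rather than pass/fail filters, is what puts a confidence limit around the decision of which compounds to take forward.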
