Target safety assessments – identifying risk early
Posted: 18 November 2021 | Drug Target Review
With an ever-increasing emphasis on minimising drug candidate attrition, scientists are focusing on target safety at earlier points in the research process. In this exclusive interview, Drug Target Review spoke with Dr Gordon Baxter, Chief Scientific Officer of Instem, to learn how target safety assessments can enable researchers to quickly identify and assess unintended adverse consequences of potential treatments before expensive investment is made.
Can you explain what a target safety assessment (TSA) is?
A TSA aims to assess the safety implications of modulating a specific target (gene, mRNA or protein), for example by antagonising or attenuating its function, which is how most drugs act. A TSA is a dossier of information about the target relating to safety and risk, ie, the potential downstream risk of interfering with the protein in some way. Modern approaches rely largely on computation, with humans getting involved mainly to check the validity of the findings. Using computers brings a wealth of advantages: they are tireless, can look at massive volumes of data, are much more accurate than humans and do not need to rest or have a coffee! Importantly, computers also remember what has already been done. This is useful as it enables us to reuse information and apply it to other targets in the same pathway.
Can you expand on why TSAs are needed?
Modern drug development can be hugely risky and there is always a balance to be struck between efficacy and safety, which needs to be addressed at the earliest possible stage. Establishing this evidence base is far more feasible than in the past: increasing volumes of relevant, freely accessible information can be obtained to infer risk. Some well-funded, highly visible organisations, like the Broad Institute, the National Center for Biotechnology Information (NCBI) and the European Bioinformatics Institute (EMBL-EBI), are producing huge amounts of data to assist with the task of assessing safety.
The key aim is to identify potential adverse consequences of target modulation and propose suitable risk mitigation strategies, with a view to confirming and maybe characterising some of those risks within clinical or pre-clinical study settings. It is really about anticipating, monitoring, addressing and managing adverse events.
How many targets do currently approved drugs interact with and how many more could be druggable?
Fewer than one thousand genes in the body are currently targeted by existing drugs. However, we understand there are upwards of 20,000 potentially druggable genes in the human genome. Many of these genes will likely co-operate and form complexes with one another, effectively meaning there are probably even more entities that could be targeted during drug development.
As we generate more knowledge, more data and more insight, the dossier around each gene is getting deeper, which means we can start connecting those genes to more subtle and innovative strategies for intervening with them and their pathways in endogenous systems. Effectively, the whole genome is becoming more druggable, especially with technologies such as gene editing, and researchers are expanding their drug target horizons accordingly.
What are the critical questions that can be addressed by a TSA?
One is the “disposition” of the target. In other words, by interrogating enormous, globally available datasets we know what targets are, what they look like, where they are positioned on the chromosome, how well conserved they are across species (a factor that might be important when selecting non-human experimental models), how much variation there is in the population (which impacts whether the drug will work or not) and whether that target is more or less sensitive to a particular intervention. The Human Genome Project and other related projects have generated a huge amount of information.
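As a minimal illustration of this kind of disposition query, the sketch below pulls a gene’s identity, chromosomal position and cross-species conservation from the public Ensembl REST API. It is a hypothetical example rather than Instem’s own workflow: the target HTR2B is chosen arbitrarily and error handling is kept to a bare minimum.

```python
import requests

ENSEMBL = "https://rest.ensembl.org"  # public Ensembl REST API
HEADERS = {"Content-Type": "application/json"}


def gene_disposition(symbol: str, species: str = "homo_sapiens") -> dict:
    """Collect basic 'disposition' facts for a target gene: identity,
    chromosomal position and cross-species conservation."""
    # What is the gene and where does it sit on the chromosome?
    r = requests.get(f"{ENSEMBL}/lookup/symbol/{species}/{symbol}", headers=HEADERS)
    r.raise_for_status()
    gene = r.json()

    # How well conserved is it? Fetch orthologues (without alignments)
    # and keep the percentage identity per species.
    r = requests.get(
        f"{ENSEMBL}/homology/symbol/{species}/{symbol}",
        params={"type": "orthologues", "sequence": "none"},
        headers=HEADERS,
    )
    r.raise_for_status()
    homologies = r.json()["data"][0]["homologies"]

    return {
        "id": gene["id"],
        "chromosome": gene["seq_region_name"],
        "span": (gene["start"], gene["end"]),
        "biotype": gene["biotype"],
        "conservation": {h["target"]["species"]: h["target"]["perc_id"]
                         for h in homologies},
    }


if __name__ == "__main__":
    # HTR2B: an example target with well-documented cardiac safety liabilities
    print(gene_disposition("HTR2B"))
```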
Our TSAs also include a “topographical review”, which seeks to understand where in the body the target is expressed and ‘active’. For example, if the gene is expressed at the mRNA level, the protein level or some functional level in the retina, then we can infer it has some role in retinal function. This gives us a starting point to identify its specific role. Our target topography can be supplied as a dynamic and interactive graphic and has been met with lots of enthusiasm across the scientific community.
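To make the flagging step of such a review concrete, here is a small, self-contained sketch: given per-tissue expression levels for a target, it lists the tissues above a threshold as candidate sites for on-target effects. The numbers are invented for illustration; a real review would draw them from resources such as GTEx or the Human Protein Atlas.

```python
# Illustrative per-tissue expression levels (arbitrary units, invented
# for this sketch; a real topographical review would pull measured
# values from GTEx, the Human Protein Atlas or similar resources).
EXPRESSION = {
    "heart": 42.0,
    "liver": 3.1,
    "retina": 18.5,
    "kidney": 0.4,
    "brain": 7.9,
}


def flag_tissues(expression: dict[str, float], threshold: float = 5.0) -> list[str]:
    """Return tissues where the target is expressed above a threshold,
    ie, candidate sites for on-target adverse effects, highest first."""
    return sorted(
        (t for t, level in expression.items() if level >= threshold),
        key=expression.get,
        reverse=True,
    )


if __name__ == "__main__":
    for tissue in flag_tissues(EXPRESSION):
        print(f"{tissue}: {EXPRESSION[tissue]:.1f}")
```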
Finally, there is the “physiological review”, which reveals the biological functions of the target: which pathways the gene or protein influences, as well as what influences it. That is really important, not just for TSAs, but for developing new therapeutic ideas, biomarker candidates, etc. Using a technology-enabled workflow, this physiological evaluation takes a consistent, unbiased and systematic approach to assessing potential hazards through analysis of the scientific literature. This means that TSAs can be compared with one another, and re-run and compared against themselves year on year as our scientific knowledge grows.
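A crude, reproducible version of that literature step can be sketched against the public NCBI E-utilities API: count PubMed records that co-mention the target and each term in a hazard vocabulary. The hazard list here is an ad hoc illustration; a production workflow would use a controlled terminology and far more sophisticated text analysis.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Illustrative hazard vocabulary; a real workflow would use a
# controlled terminology rather than this ad hoc list.
HAZARD_TERMS = ["cardiotoxicity", "hepatotoxicity", "nephrotoxicity", "teratogenicity"]


def hazard_comention_counts(target: str) -> dict[str, int]:
    """Count PubMed records co-mentioning the target and each hazard
    term, as a crude first-pass signal for the physiological review."""
    counts = {}
    for term in HAZARD_TERMS:
        r = requests.get(
            ESEARCH,
            params={
                "db": "pubmed",
                "term": f"{target} AND {term}",
                "retmode": "json",
                "retmax": 0,  # we only need the hit count, not the IDs
            },
        )
        r.raise_for_status()
        counts[term] = int(r.json()["esearchresult"]["count"])
    return counts


if __name__ == "__main__":
    print(hazard_comention_counts("HTR2B"))
```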
What is translational informatics and how is it used?
Translational informatics supports translational science and translational medicine by bringing together data, information and knowledge from research, development and clinical practice.
Our goal in the production of TSAs is to utilise relevant information from all points on the R&D continuum. We learn a lot from clinical practice and we also learn a lot from more molecularly focussed laboratory research.
We seek to predict and understand how a clinical response might be related to, or caused by, a change in the activity of a molecular target. To do this we have to gather information from every relevant source, evaluate it as a whole, see where the risks are and understand how they might be avoided.
What are some of the biggest challenges for TSAs? What are some of the biggest unknowns?
Although TSAs are not mandated by any regulatory authority, most drug developers believe they are a sensible thing to do. With the growing number of TSA requests and our growing number of collaborators, there can be practical issues with capacity management and logistics, because it is a new and rapidly developing service… and companies want results quickly! However, we have responded by developing workflows that increase our capacity, reduce turnaround time and maintain a high-quality output.
From a scientific viewpoint, a lack of information can be challenging. Because TSAs are carried out in the early research and development stages, there is sometimes not a lot of data regarding target modulation. We might have a target and know its structure, chromosome location, gene expression profiles, etc. While you can infer a lot from that bio-information, there may be nothing substantive relating to the attenuation or augmentation of the target, which is the effect a drug will actually have.
Data in the public domain can present both an opportunity and a challenge. However, there are lots of strategies for assessing and integrating data to bring it together. Many groups carry out advanced data mining, but often it is not well directed at the problem at hand. The challenge we must overcome is aligning and communicating that information: making it readable for humans, then presenting it in a way that is easy to understand, so that it generates some real insight.
Data summarisation is another challenge. It poses so many questions, such as: how do we present data? How do we make it meaningful? How do we best draw people’s attention to where a problem might be? What kind of visualisation do we produce? The solution lies in finding a way of summarising the data and presenting it clearly.
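As a toy example of that summarisation step, the sketch below turns hazard signal counts (such as the literature co-mention counts from the earlier sketch; the values here are placeholders, not real results) into a horizontal bar chart, so the reader’s eye goes straight to the largest potential problem.

```python
import matplotlib.pyplot as plt

# Placeholder hazard signal (eg, PubMed co-mention counts); values
# are invented for illustration only.
counts = {
    "cardiotoxicity": 118,
    "hepatotoxicity": 34,
    "nephrotoxicity": 9,
    "teratogenicity": 2,
}

# Sort ascending so the largest signal sits at the top of the chart.
hazards = sorted(counts, key=counts.get)
plt.barh(hazards, [counts[h] for h in hazards])
plt.xlabel("PubMed co-mentions with target")
plt.title("Crude hazard signal for an example target")
plt.tight_layout()
plt.savefig("hazard_signal.png")
```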
How do you see TSAs evolving in the future?
In the short to medium term, I think what we will see is increased adoption of the TSA as a critical requirement during drug development. There will be a much more established and better-defined process, which may even be mandated by regulatory authorities. We are likely to be using more formal, standard terminologies and validated processes, connecting upstream to more discovery-related data and downstream to clinical data.
In the longer term, we may be connecting a standardised TSA process to downstream development processes using the Standard for Exchange of Nonclinical Data (SEND), as activities using this standard will be addressing any risk identified at the TSA stage. These frameworks are being rapidly adopted, driven by regulatory authorities, and are fantastic strategies for making sure information can be exchanged, combined and understood.
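As a rough sketch of that connection, the snippet below writes a few SEND-style laboratory-results (LB domain) records of the kind a pre-clinical study might capture to follow up a hepatic risk flagged in a TSA. The variable names (STUDYID, USUBJID, LBTESTCD and so on) follow CDISC conventions, but the study, subjects and values are invented and this is not a validated SEND dataset.

```python
import csv
import sys

# Hypothetical SEND-style LB (laboratory test results) records tracking
# liver enzymes after a hepatic risk was flagged at the TSA stage.
# Identifiers and values are invented; a real SEND dataset would be
# produced and validated against the CDISC SEND standard.
FIELDS = ["STUDYID", "DOMAIN", "USUBJID", "LBTESTCD", "LBTEST", "LBORRES", "LBORRESU"]
RECORDS = [
    ("STUDY01", "LB", "STUDY01-0001", "ALT", "Alanine Aminotransferase", "78", "U/L"),
    ("STUDY01", "LB", "STUDY01-0001", "AST", "Aspartate Aminotransferase", "65", "U/L"),
    ("STUDY01", "LB", "STUDY01-0002", "ALT", "Alanine Aminotransferase", "41", "U/L"),
]

writer = csv.writer(sys.stdout)
writer.writerow(FIELDS)
writer.writerows(RECORDS)
```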
I also think there will be increasing volumes of quality information and we will better understand some of the things we are just inferring now. Bearing in mind the massive translational gap between target discovery (large numbers) and final pharmaceutical products (much smaller numbers), we still need to know more about what translates and why. Therefore, I think TSAs will become even more important in the future.
Dr Gordon Baxter is the Chief Scientific Officer of Instem. Gordon has held several senior discovery positions in major pharmaceutical companies. He has a track record of innovation in drug discovery and has contributed to the development of numerous marketed drugs. The architect and driving force behind Instem’s entire approach to TSAs, Gordon is an outspoken advocate for the efficient exploitation of data to generate new therapeutic insights.
Related topics
Drug Development, Informatics, Molecular Targets, Target Validation, Technology
Related organisations
Instem
Related people
Dr Gordon Baxter