The Pillar Post

Dr. Anna Barker at #AACR18 Cancer biomarkers: Moving from promise to reality

May 24, 2018 by Dale Yuzuki

The universe of cancer research is so large and so wide-ranging that, attending a conference like the recent American Association for Cancer Research Annual Meeting (#AACR18) in Chicago, you may discover rising new scientists or be introduced to others whose work you may not be familiar with. Dr. Anna Barker was Deputy Director of the National Cancer Institute from 2002-2010 and Deputy Director for Strategic Scientific Initiatives, helping organize major projects like The Cancer Genome Atlas (TCGA), the cancer Human Biobank (caHUB) and the cancer Biomedical Informatics Grid (caBIG).

She is now the director of the Transformative Healthcare Initiative at Arizona State University, received the 2018 AACR Distinguished Award for Exceptional Leadership in Cancer Science Policy and Advocacy, and continues to serve on several AACR committees. She gave a presentation at AACR called ‘Cancer biomarkers: moving from promise to reality’, and notes from her talk are below.

Definition of the term “Biomarker”

She began by saying of cancer biomarkers, ‘Truth is, we have almost none. I was going to give my talk the title “Cancer biomarkers: looking in all the wrong places”.’

The definition of a biomarker was clarified in 2015 by the FDA-NIH Biomarker Working Group, in an online glossary called the Biomarkers, EndpointS, and other Tools (BEST) resource. The BEST glossary is available online (see Resources below) and defines a biomarker with the following statement:

A defined characteristic that is measured as an indicator of normal biological processes, pathogenic processes, or responses to an exposure or intervention, including therapeutic interventions. Molecular, histologic, radiographic, or physiologic characteristics are types of biomarkers. A biomarker is not an assessment of how a patient feels, functions, or survives.

The 1989 ‘Prentice Criteria’ defined the familiar quantitative dimensions of a biomarker: specificity, sensitivity, precision, positive predictive value, and percent positive agreement. Yet the true value of a biomarker (as the FDA describes it) lies in being ‘fit for purpose’: its context of use.
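As a minimal sketch of how those quantitative dimensions are computed from a 2×2 confusion matrix (the counts below are hypothetical, not from the talk):

```python
# Minimal sketch: quantitative dimensions of a binary biomarker test,
# computed from a hypothetical 2x2 confusion matrix (counts are made up).
tp, fp = 90, 30    # test positive: disease present / disease absent
fn, tn = 10, 870   # test negative: disease present / disease absent

sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
ppv = tp / (tp + fp)           # positive predictive value
overall_agreement = (tp + tn) / (tp + fp + fn + tn)
# Note: when the comparator is another assay rather than clinical truth,
# tp/(tp+fn) is reported as positive percent agreement instead of sensitivity.

print(f"sensitivity        {sensitivity:.1%}")
print(f"specificity        {specificity:.1%}")
print(f"PPV                {ppv:.1%}")
print(f"overall agreement  {overall_agreement:.1%}")
```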

Classification of biomarkers

Cancer biomarkers are classified by their clinical utility: for disease risk, for diagnosing cancer, for monitoring cancer progression, for determining prognosis, and, probably most important, for pharmacodynamic properties: the biologic response of the system following therapeutic intervention. Patients are often overtreated due to a lack of toxicity biomarkers.

She said, “Precision medicine has moved along because NGS (next-generation sequencing) has been getting better; proteomics as a field has not done well at all.” And the best biomarkers we currently have are imaging-based, such as magnetic resonance imaging (MRI), computed tomography (CT), and fluorodeoxyglucose positron emission tomography (FDG-PET). These technologies are readily available (in the developed world at least) and are widely used for cancer diagnosis, monitoring, response to therapy, and disease prognosis.

Another interesting comment she made was that the “FDA does not care if your diagnostic is a ‘black box’” as long as you can demonstrate its effectiveness at diagnosis, prognosis, etc., along with its clinical utility. Clinically useful biomarkers are “almost always complex” because human biology is complex, and thus they have incredible potential.

The ideal biomarker

She quoted Dr. Janet Woodcock, the current Director of the FDA’s Center for Drug Evaluation and Research (CDER), who asked the question “Why can’t you tell me if my medicine is working? …Biomarkers are the key tools that we need.”

Dr. Barker then brought up a very useful illustration of an ideal biomarker: measurement of blood pressure. It correlates very well with the status of cardiovascular health, whether diseased or healthy; it is easy to perform and inexpensive to obtain. It measures the integrated expression of a complex, adaptive system with quantitative information.

The ‘Omics Revolution

She was instrumental in initiating The Cancer Genome Atlas project, announced in late 2005 with an initial 3-year, $100M pilot, and with its first major findings published in 2008. (A comprehensive history is available here: https://cancergenome.nih.gov/abouttcga/overview/history) Now whole-genome sequencing, at price points of $1,000 or less, has resulted in millions of sequenced samples. At current rates, data volumes are on the order of 1,000 petabases per year, and she said the data quality now is ‘the best we’ve ever had’.
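As a rough back-of-envelope check on that scale (the per-genome and per-year figures below are illustrative assumptions, not numbers from the talk), around ten million 30× whole genomes per year would indeed land near 1,000 petabases annually:

```python
# Back-of-envelope estimate of annual sequencing data volume.
# All input figures are illustrative assumptions, not numbers from the talk.
haploid_genome_bases = 3.1e9      # ~3.1 gigabases per human genome
coverage = 30                     # typical whole-genome sequencing depth
samples_per_year = 10_000_000     # assumed genome-scale samples per year

raw_bases_per_sample = haploid_genome_bases * coverage          # ~93 Gb each
petabases_per_year = raw_bases_per_sample * samples_per_year / 1e15

print(f"~{raw_bases_per_sample / 1e9:.0f} gigabases of raw sequence per sample")
print(f"~{petabases_per_year:.0f} petabases per year")
```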

Dr. Barker then showed a figure from the first TCGA paper, presenting results on glioblastoma multiforme from 2008. She called it ‘the kind of science that changes the world’, with its identification of new subtypes of cancers, identification of novel major signalling pathways, and a starting place for understanding the interactions between these pathways. However, the treatment of cancer patients has not changed.

Limitations and Requirements

Of approximately 150,000 claimed biomarkers, there are an estimated 100 biomarkers in routine clinical usage. (For details, see this reference.) There are just a handful of approved companion diagnostics. Naturally, oncology clinical trials have massive attrition, take years to complete, and carry high costs, with a low chance of success.

If there were high-value, biologically and clinically relevant biomarkers, she said, there would be smarter study designs requiring less time and fewer clinical trial volunteers. Biomarkers fail in these companion diagnostic trials because they do not address the complexity of the disease, are not adaptive enough to track the dynamic nature of cancer, and are neither ubiquitous nor sample-friendly in operation.

Further, she said, biomarkers need to be amenable to development into simple-to-use technology that is cost-effective and fits easily into the clinical-care workflow.

The National Biomarker Development Alliance

To advance these goals, a non-profit organization has been set up: the National Biomarker Development Alliance. In one of their workshop reports, the following six barriers to adoption were listed (quoting from the full report):

1. Currently the supporting evidence for a biomarker is often limited, making the decision to test a biomarker hypothesis a major challenge.
2. If the drug target is not the biomarker, then evidence that the biomarker is clinically useful is often acquired only after drug clinical trials commence or conclude, creating a major timeline issue.
3. The prevalence of the identified biomarker in broader enrolled populations in clinical studies is often unknown.
4. The prevalence of a biomarker may not be the same across all populations (see the numerical sketch after this list).
5. The biology of the tumor may not be the same across all populations, raising the question of how to set cut-off points for biomarker assays results that can inform clinical decisions.
6. If biology informs the cut-off points for making yes/no decisions based on biomarker assay results, will that cut-off be the same for different therapeutics directed to the same pathway that employs the same biomarker?
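On points 3 and 4, here is a minimal sketch of why prevalence matters (the sensitivity, specificity, and prevalence values are illustrative assumptions, not figures from the report): for the same assay, the positive predictive value of a marker-positive result shifts sharply as the prevalence of the marker changes across populations.

```python
# How prevalence changes the positive predictive value (PPV) of the same assay.
# Sensitivity, specificity, and prevalence values are illustrative assumptions.
sensitivity = 0.90
specificity = 0.95

def ppv(prevalence: float) -> float:
    """Positive predictive value via Bayes' rule for a given marker prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prevalence in (0.01, 0.05, 0.20, 0.50):
    print(f"prevalence {prevalence:>4.0%} -> PPV {ppv(prevalence):.1%}")
```

Under these assumed numbers, the same assay yields a PPV of roughly 15% at 1% prevalence and about 95% at 50% prevalence, which is why cut-offs and expected performance cannot simply be carried over from one population to another.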

In addition to these clinically relevant biomarker questions, there are other challenges and barriers, including inadequate experimental design, poor-quality sample specimens for testing, poor-quality data and a lack of data standards, and inadequate analytics. With regard to this last point, she mentioned that the ‘lack of data standards is killing us’, and elaborated that standards and data evidence are needed at every step of the process, from early discovery through translation into the clinic, assay development, assay performance, biomarker qualification, and biomarker validation.

Conclusions: the need for gold-standard Big Data

Dr. Barker concluded her presentation with a description of cancers as ‘malignant snowflakes’: individual cancers that carry multiple unique mutations as complex adaptive systems. She showed the famous ‘Wagle slide’ (Wagle N, Garraway LA, et al., J. Clin. Oncol. 2011, “Dissecting Therapeutic Resistance to RAF Inhibition in Melanoma by Tumor Genomic Profiling”, Figure 2) as an illustration of the evolution of resistance in a single individual. Cancer as a complex system produces emergent properties like metastasis over time, with the accumulation of mutations resulting in complex interactions with external environmental variables that include exposure to chemical agents, viruses, hormones, and nutrients.

As a complex system that changes in three-dimensional space over time, cancer requires biomarkers that ‘must reflect complex contexts at different scales over time’. And in a world of big data, where the volume, velocity, and variety of major data types are increasing, the future is in deep learning and machine learning.

In this world of big data, you need to ask the right questions, and to be well-grounded in information theory and the nature of information. Information is not data.

She concluded by stating a genuine need: gold-standard databases, as deep learning and machine learning (modes of Artificial Intelligence or AI) cannot work with poor-quality data.

Resources:
BEST resource from the FDA-NIH Biomarker Working Group: https://www.ncbi.nlm.nih.gov/books/NBK326791/
National Biomarker Development Alliance: http://www.nbdabiomarkers.org/