[Tralau T, Riebeling C, Pirow R, Oelgeschläger M, Seiler A, Liebsch M, Luch A. Wind of change challenges toxicological regulators. Environ Health Perspect. 2012 Nov;120(11):1489-94.]
“BACKGROUND: In biomedical research, the past two decades have seen the advent of in vitro model systems based on stem cells, humanized cell lines, and engineered organotypic tissues, as well as numerous cellular assays based on primarily established tumor-derived cell lines and their genetically modified derivatives.
OBJECTIVE: There are high hopes that these systems might replace the need for animal testing in regulatory toxicology. However, despite increasing pressure in recent years to reduce animal testing, regulators are still reluctant to adopt in vitro approaches on a large scale. It thus seems appropriate to consider how we could realistically perform regulatory toxicity testing using in vitro assays only.
DISCUSSION AND CONCLUSION: Here, we suggest an in vitro-only approach for regulatory testing that will benefit consumers, industry, and regulators alike.”
In the full text, on validation:
“However, a humanized in vitro testing scheme, or “Tox-Test Dummy” (Figure 1), will face a different dilemma: specifically, a limited data set for validation. Human data usually originate from accidents, individual case reports, or retrospective studies. Using existing in vivo animal data for comparison will not necessarily solve this problem. Out of several thousand developmental toxicants identified in animal studies, only about 50 have been reported to exhibit embryotoxic effects in humans (Schardein and Keller 1989). A solution to this problem could lie in the analysis and comparison of adversely affected biochemical pathways in humans and animals, respectively. The ToxCast program, as part of Tox21, currently identifies biological pathways that are altered as a consequence of toxicological insult (Chandler et al. 2011; Kleinstreuer et al. 2011). Linked with other available in vitro and in vivo data from human exposure and animal testing, these data can be used not only to improve understanding of the underlying biochemistry, but also to elucidate differences and similarities between species. Ultimately, such a detailed understanding of the affected pathways across species might be used to validate the human relevance of in vitro assays.”
In the full text, on alternative methods:
“Typical routes of exposure are dermal, inhalation, and ingestion. Absorption through the mucous membranes can be modeled in silico. Likewise, there are validated methods to measure dermal absorption in vitro, and cell systems for other barriers are available as well (Adler et al. 2011; European Commission 2008). If absorbed, the chemical is likely to reach the bloodstream. Using physiologically based toxicokinetic (PBTK) modeling, it is then possible to predict organ exposure levels and thereby establish relevant concentrations for any subsequent in vitro testing (Figure 1) (Mielke et al. 2011). The potential of coupling organ-specific PBTK to downstream assays such as gene arrays was recently demonstrated by Meyer et al. (2012), who used this approach to investigate the in vivo activity of pravastatin.
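The PBTK step described above can be sketched numerically. The toy model below is a minimal one-compartment toxicokinetic calculation (first-order absorption and elimination, the classic Bateman equation), not the organ-specific PBTK models cited in the article; the dose, absorbed fraction, rate constants, and volume of distribution are all hypothetical placeholder values chosen for illustration.

```python
# Minimal one-compartment toxicokinetic sketch (illustrative only):
# estimate blood concentration over time after dermal exposure,
# assuming first-order absorption and elimination. All parameter
# values are hypothetical placeholders, not values from the article.
import math

def blood_concentration(t_h, dose_mg, f_abs, ka, ke, vd_l):
    """Bateman equation for first-order absorption/elimination.

    t_h     : time since exposure (hours)
    dose_mg : applied dose (mg)
    f_abs   : fraction absorbed through the skin barrier
    ka, ke  : absorption / elimination rate constants (1/h)
    vd_l    : volume of distribution (litres)
    """
    return (f_abs * dose_mg * ka / (vd_l * (ka - ke))) * (
        math.exp(-ke * t_h) - math.exp(-ka * t_h)
    )

# Concentration-time course for a hypothetical chemical (mg/L):
conc = [blood_concentration(t, dose_mg=100, f_abs=0.1,
                            ka=1.0, ke=0.2, vd_l=40) for t in range(25)]
c_max = max(conc)   # peak blood level, sets the test concentration
```

The peak concentration (`c_max`) is the kind of value a PBTK step would pass downstream to define realistic test concentrations for the subsequent in vitro assays.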
One of the most important organs will undoubtedly be the liver, where phase I metabolism facilitates excretion but also increases the toxicity of some substances. Hence, liver metabolism is currently one of the most important research areas of in vitro testing. Although current high-throughput projects such as ToxCast typically do not include xenobiotic metabolism in their in vitro assays, they recognize that this issue is critical to the success of their efforts. Currently available systems for mimicking liver metabolism include the use of liver homogenate (S9 extracts), transgenic cell lines, hepatocyte-like cell monolayers, and 3D organotypic cultures (Adler et al. 2011; Esch et al. 2011; Giri et al. 2011; Landsiedel et al. 2011). The S9 extracts are frequently sourced from rodents because of the limited availability of pathologically unaffected human liver tissue or primary human hepatocytes, which raises concerns about species specificity. However, the generation of hepatocytes from induced pluripotent stem cells may give rise to an unlimited resource of human material (Chen et al. 2012; Medine et al. 2010; Takayama et al. 2012; Tralau and Luch 2012; Wongkajornsilp et al. 2012). We will not discuss the individual pros and cons of these systems here, but we will assume that such systems will be integrated into routine in vitro testing. One of the most important issues for an integrated model is how the resulting metabolites are transferred to the next assay. The use of S9 extracts in cell culture has proven problematic, as has the use of liver-cell culture supernatant (Hettwer et al. 2010). These issues are currently being addressed, for example, by coupling assays via suitable metabolically competent organotypic cultures (Sonntag et al. 2010; Sung and Shuler 2010).
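The hand-off from a liver-metabolism step to the next assay can be illustrated with a deliberately simple calculation: first-order conversion of a parent compound into its metabolite, whose concentration after a fixed incubation then serves as the dose for the downstream system. The rate constant, incubation time, and inlet concentration below are assumed values for illustration, not measurements from any of the cited systems.

```python
# Sketch of coupling a liver-metabolism step to a downstream assay
# (illustrative; all numbers are hypothetical). A parent compound is
# converted by first-order hepatic metabolism; the metabolite
# concentration after incubation is what would be applied to the
# next in vitro system (e.g., a blood-brain barrier model).
import math

def metabolite_fraction(t_h, km):
    """Fraction of parent converted after t_h hours at rate km (1/h)."""
    return 1.0 - math.exp(-km * t_h)

parent_uM = 50.0                         # hypothetical liver-inlet level
converted = metabolite_fraction(t_h=4.0, km=0.5)
metabolite_uM = parent_uM * converted    # dose for the downstream assay
```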
In the next step, the chemical and its metabolites (if any) need to be tested for their tissue-barrier mobility. In the case of impermeable substances, the exclusion of whole organs or tissues from testing will help to minimize the need for in vitro testing and avoid false-positive results. Again, the application of PBTK modeling allows for the prediction of realistic concentrations and doses and helps to prioritize subsequent testing. When metabolism is known, PBTK modeling can even model the homeostasis of whole organs (Subramanian et al. 2008). Likewise, simple assays with false-negative rates close to zero could be used to prioritize chemicals for more involved organotypic assays, even if the initial screening assays have high false-positive rates.
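The prioritization logic in the passage above (cheap screens with false-negative rates close to zero, even at the cost of high false-positive rates) can be made concrete with a little arithmetic. The prevalence, sensitivity, and false-positive rate below are assumed example values, not figures from the article.

```python
# Illustrative tiered-screening arithmetic (assumed numbers): a cheap
# first-tier assay with a near-zero false-negative rate but a high
# false-positive rate still cuts the number of chemicals that need
# expensive organotypic follow-up, while missing almost no toxicants.

def flagged_fraction(prevalence, sensitivity, false_positive_rate):
    """Fraction of the chemical library flagged for follow-up testing."""
    return (prevalence * sensitivity
            + (1.0 - prevalence) * false_positive_rate)

# 5% true toxicants, 99.9% sensitivity, 30% false-positive rate:
workload = flagged_fraction(0.05, 0.999, 0.30)   # ~33% need follow-up
missed = 0.05 * (1 - 0.999)                      # toxicants wrongly cleared
```

Under these assumptions, only about a third of the library proceeds to organotypic assays, and the fraction of toxicants wrongly cleared stays below one in ten thousand.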
If the primary target of chronic exposure and toxicity is the liver, the second most prevalent target is the kidneys, followed by the reproductive organs, the brain, hematopoietic tissues, and bone. Cell culture models are available for most of these organs, either as immortal cell lines, primary cultures, reprogrammed stem cells, or even organotypic cultures (i.e., Peljto and Wichterle 2011; Wobus and Löser 2011). For other organs, miniaturized chips, such as the “lung on a chip” (Huh et al. 2010), can be used to measure cellular reactions under physiological conditions. All of these systems allow the detection of necrotic and apoptotic cell death. Nevertheless, a major challenge is the reliable detection of carcinogenic and mutagenic events as well as developmental defects. Many of the underlying molecular pathways are known and each year we learn more about the respective key molecules. For many of these pathways, biochemical and cellular assays are available, as are reporter cell lines (reviewed by Schenk et al. 2010). A reliable combined molecular testing strategy, however, is usually missing, because we still do not understand the key events well enough. At the same time, it appears that “omics” approaches can be used to identify toxic signature patterns within cellular metabolic pathways in vitro (Winkler et al. 2009). Such an approach would not only increase predictivity and be suitable for high-throughput screening, but would also allow simultaneous measurement of multiple end points.
What are the challenges of an in vitro approach, and what performance can we realistically expect? We will try to elucidate these questions using the example of a putative herbicide that will turn out to be a neurotoxicant after metabolic conversion. First, a skin-barrier model would be used to determine the amount of herbicide that reaches the blood after dermal exposure. Next, a model for the toxicant’s distribution in blood, such as a PBTK model, would be used to estimate the concentration reaching the liver. The liver model would then be used to metabolize the agent, and the resulting metabolite (or metabolites) would be applied to a set of organ-mimicking in vitro systems, including a model of the blood–brain barrier. Finally, a brain model would be exposed to the molecules that are capable of crossing the blood–brain barrier. For the sake of our example, we assume that only the brain model shows an adverse response, and that parallel assays (e.g., assays for liver and spleen toxicity) do not contribute to the detection of our assumed neurotoxin. For good predictivity, we also assume that all models are composed of nearly all cell types representing the modeled organ, for instance, by appropriate differentiation of human induced pluripotent stem cells. Altogether, five modeled steps would be required for hazard identification. If we assign a worst-case predictivity of 75% to each step, the total predictivity would be about 24%. To reach the 60% predictivity of animal models toward human toxicants, each in vitro step would have to exhibit > 90% predictivity, and to achieve 95% overall predictivity, each individual assay would have to perform better than 99%. Parallel assays in other organ models that are not mentioned here would not affect this calculation. However, they would have to exhibit the same predictivity to yield a similar overall predictivity in different scenarios.
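The compound-predictivity arithmetic in the five-step example above can be reproduced directly: overall predictivity is the product of the per-step values, and the per-step requirement for a given target is the corresponding fifth root.

```python
# Compound predictivity across a chain of n independent assay steps,
# reproducing the five-step figures from the example above.

def overall(per_step, n_steps=5):
    """Overall predictivity when each step has the same predictivity."""
    return per_step ** n_steps

def required_per_step(target, n_steps=5):
    """Per-step predictivity needed to reach a target overall value."""
    return target ** (1.0 / n_steps)

overall(0.75)            # ≈ 0.237 -> "about 24%"
required_per_step(0.60)  # ≈ 0.903 -> each step > 90%
required_per_step(0.95)  # ≈ 0.990 -> each step > 99%
```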
At first analysis, these requirements look like a formidable challenge. However, in combination with metabolomics and transcriptomics approaches, current in vitro models already tend to reach 80–95% predictivity, sometimes even more. Gene arrays have already been used to predict liver damage in primary rat hepatocytes with 91% sensitivity and 88% specificity (Dai et al. 2006). Similarly, a recent proof-of-concept study used transcriptomic analysis to identify chemical carcinogens in hepatocyte-like cells derived from human embryonic stem cells. The overall accuracy of this system came close to 96% (Yildirimman et al. 2011). Metabolomic analysis of human WA09 embryonic stem cells identified teratogenic substances, including thalidomide, with 88% predictivity (Kleinstreuer et al. 2011). Likewise, the combination of read-across with several quantitative structure–activity relationship (QSAR) models allowed Hewitt et al. (2010) to reach 89% predictivity for developmental toxicity. Even for the notorious non-genotoxic carcinogens, toxicogenomic approaches reach a predictivity of up to 80%, which is superior to the classic rodent–cancer bioassay (Fielden et al. 2011; Liu et al. 2011; Low et al. 2011). Thus, for our suggested Tox-Test Dummy, it seems realistic to expect an overall predictivity of 51–86% based on current assays, although predictivity would be higher for common scenarios that do not involve neurotoxicity, and therefore would require only four steps. Moreover, this calculation does not account for the additional benefits that would result from the use of human cells and the integration of several organ models onto a single chip (Esch et al. 2011; Huh et al. 2010). In a recent proof-of-principle study, Prot et al. (2012) recapitulated major aspects of acetaminophen hepatotoxicity on a biochip.”
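A note on interpreting headline figures such as the quoted "91% sensitivity, 88% specificity": what those numbers mean in practice depends on the prevalence of toxicants in the tested set. The sketch below converts sensitivity and specificity into positive and negative predictive values via Bayes' rule; the 20% prevalence is an assumed value for illustration, not a figure from the article.

```python
# Convert assay sensitivity/specificity into predictive values.
# The prevalence of true toxicants (20%) is an assumed example value.

def predictive_values(sens, spec, prevalence):
    """Positive and negative predictive value via Bayes' rule."""
    tp = sens * prevalence              # true positives
    fp = (1 - spec) * (1 - prevalence)  # false positives
    fn = (1 - sens) * prevalence        # false negatives
    tn = spec * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Using the quoted 91% sensitivity / 88% specificity figures:
ppv, npv = predictive_values(0.91, 0.88, prevalence=0.20)
# ppv ~ 0.65: a positive call is a toxicant about two times in three
# npv ~ 0.98: a negative call clears a chemical with high confidence
```

The asymmetry between the two values is one reason the article's tiered strategy emphasizes near-zero false-negative rates for screening assays: negative calls must be trustworthy, while positives can be re-tested downstream.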