The last decade has witnessed a revolution in the clinical application of high-dose “ablative” radiation therapy. Initially, this approach was limited to the treatment of brain tumors, but more recently we have seen its successful extension to tumors outside the brain, e.g., for small lung nodules. These advances have been driven largely by improvements in image-guided inverse treatment planning that allow the dose per fraction to the tumor to be increased over the conventional 2 Gy dose while keeping the late normal tissue complications at an acceptable level by dose limitation. Despite initial concerns about excessive late complications, as might be expected based on dose extrapolations using the linear-quadratic equation, these approaches have shown considerable clinical promise. Our knowledge of the biological consequences of high doses of ionizing radiation in normal and cancerous tissues has lagged behind these clinical advances. Our intent here is to survey recent experimental findings from the perspective of better understanding the biological effects of high-dose therapy and whether they are truly different from those of conventional doses. We will also consider the implications of this knowledge for further refining and improving these approaches on the basis of underlying mechanisms.
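The late-complication concern mentioned above comes from linear-quadratic (LQ) dose extrapolation. As a minimal illustration (not taken from any of the studies summarized here), the following Python sketch compares the biologically effective dose (BED) of a conventional 30 × 2 Gy schedule with a hypothetical ablative 3 × 18 Gy schedule; the α/β values of 10 Gy (tumor) and 3 Gy (late-responding normal tissue) are conventional textbook assumptions.

```python
# Illustrative sketch only: biologically effective dose under the standard
# linear-quadratic model, BED = n * d * (1 + d / (alpha/beta)).
# The schedules and alpha/beta values are assumptions for illustration.

def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose (Gy) for n fractions of d Gy each."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

schedules = {
    "conventional 30 x 2 Gy": (30, 2.0),
    "ablative 3 x 18 Gy": (3, 18.0),
}

for name, (n, d) in schedules.items():
    print(f"{name}: tumor BED (a/b=10) = {bed(n, d, 10.0):.0f} Gy, "
          f"late-tissue BED (a/b=3) = {bed(n, d, 3.0):.0f} Gy")
```

Under these assumptions, raising the dose per fraction increases the extrapolated late-tissue BED far more steeply than the tumor BED, which is why dose limitation to normal tissue is emphasized in ablative planning.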
In contrast to the classic view of static DNA double-strand breaks (DSBs) being repaired at the site of damage, we hypothesize that DSBs move and merge with each other over large distances (μm). As X-ray dose increases, the probability of having DSB clusters increases, as does the probability of misrepair and cell death. Experimental work characterizing the X-ray dose dependence of radiation-induced foci (RIF) in nonmalignant human mammary epithelial cells (MCF10A) is used here to validate a DSB clustering model. We then use the principles of the local effect model (LEM) to predict the yield of DSBs at the submicron level. Two mechanisms for DSB clustering, namely random coalescence of DSBs versus active movement of DSBs into repair domains, are compared and tested. Simulations that best predicted both RIF dose dependence and cell survival after X-ray irradiation favored the repair domain hypothesis, suggesting the nucleus is divided into an array of regularly spaced repair domains with sides of ∼1.55 μm. Applying the same approach to high-linear energy transfer (LET) ion tracks, we are able to predict experimental RIF/μm along tracks with an overall relative error of 12%, for LET ranging from 30 to 350 keV/μm and for three different ions. Finally, cell death was predicted by assuming an exponential dependence on the total number of DSBs and on the number of all possible combinations of paired DSBs within each simulated RIF. Relative biological effectiveness (RBE) predictions for cell survival of MCF10A exposed to high-LET radiation showed an LET dependence that matches previous experimental results for similar cell types. Overall, this work suggests that the microdosimetric properties of ion tracks at the submicron level are sufficient to explain both RIF data and survival curves for any LET, similarly to the LEM assumption. Conversely, the cell death mechanism at high LET does not have to invoke the linear-quadratic dose formalism, as is done in the LEM. In addition, the size of the repair domains derived in our model is based on experimental RIF and is three times larger than the hypothetical LEM voxel used to fit survival curves. Our model is therefore an alternative to previous approaches that provides a testable biological mechanism (i.e., RIF). In addition, we propose that DSB pairing will help develop more accurate alternatives to the linear no-threshold (LNT) cancer risk model currently used for regulating exposure to very low levels of ionizing radiation.
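As a rough illustration of the repair-domain hypothesis described above (not the authors' actual code), the sketch below places Poisson-distributed DSBs uniformly in a cubic nucleus, bins them into a regular grid of 1.55 μm repair domains, and scores survival with an exponential dependence on the total DSB count and on the number of within-domain DSB pairs. The nucleus size, DSB yield per Gy, and the lethality coefficients A and B are placeholder assumptions, not fitted values from the study.

```python
# Minimal Monte Carlo sketch of the repair-domain hypothesis. All parameters
# below are illustrative assumptions, not the paper's fitted values.
import numpy as np

rng = np.random.default_rng(0)
NUCLEUS_SIZE = 10.0      # um, cube edge (illustrative)
DOMAIN_SIZE = 1.55       # um, repair-domain edge from the model
DSB_PER_GY = 35.0        # illustrative X-ray DSB yield per nucleus
A, B = 0.01, 0.05        # placeholder lethality per DSB and per DSB pair

def simulated_survival(dose_gy: float, n_cells: int = 2000) -> float:
    surviving = 0.0
    for _ in range(n_cells):
        n_dsb = rng.poisson(DSB_PER_GY * dose_gy)
        # Uniform DSB positions, binned into the regular array of repair domains.
        positions = rng.uniform(0.0, NUCLEUS_SIZE, size=(n_dsb, 3))
        domain_idx = np.floor(positions / DOMAIN_SIZE).astype(int)
        _, counts = np.unique(domain_idx, axis=0, return_counts=True)
        pairs = np.sum(counts * (counts - 1) / 2)   # DSB pairs per domain (RIF)
        surviving += np.exp(-(A * n_dsb + B * pairs))
    return surviving / n_cells

for d in (1, 2, 4, 8):
    print(f"{d} Gy: predicted surviving fraction ~ {simulated_survival(d):.3f}")
```

Because the pair term grows roughly quadratically with the DSB count per domain, this construction naturally produces downward curvature in the survival response without assuming a linear-quadratic dose law at the outset.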
During space travel, astronauts are exposed to cosmic radiation that is composed of high-energy nuclear particles. Cancer patients are also exposed to high-energy nuclear particles when treated with proton and carbon beams. Nuclear interactions from high-energy particles traversing shielding materials and tissue produce low-energy (<10 MeV/n) secondary particles of high LET that contribute significantly to overall radiation exposures. Track structure theories suggest that high charge and energy (HZE) particles and low-energy secondary ions of similar LET will have distinct biological effects for cellular and tissue damage endpoints. We investigated the biological effects of low-energy ions of high LET utilizing the Tandem Van de Graaff accelerator at Brookhaven National Laboratory (BNL), and compared these to experiments with HZE particles that mimic the space environment, produced at the NASA Space Radiation Laboratory (NSRL) at BNL. Immunostaining for DNA damage response proteins was carried out after irradiation with 5.6 MeV/n boron (LET 205 keV/μm), 5.3 MeV/n silicon (LET 1241 keV/μm), 600 MeV/n Fe (LET 180 keV/μm) and 77 MeV/n oxygen (LET 58 keV/μm) particles. Low-energy ions caused more persistent DNA damage response (DDR) protein foci in irradiated human fibroblasts and esophageal epithelial cells compared to HZE particles. More detailed studies comparing boron ions to Fe particles showed that boron-ion radiation resulted in a stronger G2 delay than Fe-particle exposure, and boron ions also showed an early recruitment of Rad51 at double-strand break (DSB) sites, which suggests a preference for homologous recombination in DSB repair after exposure to low-energy, albeit high-LET, particles. Our experiments suggest that the very high local energy deposition by low-energy ions, representative of galactic cosmic radiation and solar particle event secondary radiation, generates massive but localized DNA damage leading to delayed DSB repair and cellular responses distinct from those to HZE particles. Thus, low-energy heavy ions provide a valuable probe for studies of homologous recombination repair in radiation responses.
Previous ground-based experiments have shown that cranial irradiation with mission-relevant (20 cGy) doses of 1 GeV/nucleon 56Fe particles leads to a significant impairment in Attentional Set Shifting (ATSET) performance, a measure of executive function, in juvenile Wistar rats. However, the use of head-only radiation exposure and the biological age of the rats used in that study may not be pertinent for determining the likelihood that ATSET will be impaired in astronauts on deep space flights. In this study we have determined the impact that whole-body exposure to 10, 15 and 20 cGy of 1 GeV/nucleon 56Fe particles had on the ability (at three months post exposure) of socially mature (retired breeder) Wistar rats to conduct the attentional set-shifting paradigm. The current study has established that whole-body exposures to 15 and 20 (but not 10) cGy of 1 GeV/nucleon 56Fe particles result in the impairment of ATSET in both juvenile and socially mature rats. However, the exact nature of the impaired ATSET performance varied depending upon the age of the rats, whether whole-body or cranial irradiation was used, and the dose of 1 GeV/nucleon 56Fe received. Exposure of juvenile rats to 20 cGy of 1 GeV/nucleon 56Fe particles led to a decreased ability to perform intra-dimensional shifting (IDS) irrespective of whether the rats received head-only or whole-body exposures. Juvenile rats that received whole-body exposure also had a reduced ability to habituate to the assay and to complete intra-dimensional shifting reversal (IDR), whereas juvenile rats that received head-only exposure had a reduced ability to complete compound discrimination reversal (CDR). Socially mature rats that received whole-body exposures to 10 cGy of 1 GeV/nucleon 56Fe particles exhibited no obvious decline in set-shifting performance; however, those exposed to 15 and 20 cGy had a reduced ability to perform simple discrimination (SD) and compound discrimination (CD). Exposure to 20 cGy of 1 GeV/nucleon 56Fe particles also led to decreased performance in IDR and to ∼25% of rats failing to habituate to the task. Most of these rats started to dig for the food reward but rapidly (within 15 s) gave up digging, suggesting that they had developed appropriate procedural memories about food retrieval but were unable to maintain attention on the task. Our preliminary data suggest that whole-body exposure to 20 cGy of 1 GeV/nucleon 56Fe particles reduced the cholinergic (but not the GABAergic) readily releasable pool (RRP) in nerve terminals of the basal forebrain from socially mature rats. This perturbation of the cholinergic RRP could directly lead to the loss of CDR and IDR performance, and indirectly [through the metabolic changes in the medial prefrontal cortex (mPFC)] to the loss of SD and CD performance. These findings provide the first evidence that attentional set-shifting performance in socially mature rats is impaired after whole-body exposure to mission-relevant doses (15 and 20 cGy) of 1 GeV/nucleon 56Fe particles, and importantly that a dose reduction down to 10 cGy prevents that impairment. The ability to conduct discrimination tasks (SD and CD) and reversal learning (CDR) is reduced after exposure to 15 and 20 cGy of 1 GeV/nucleon 56Fe particles, but at 20 cGy there is an additional decrement: ∼25% of rats are unable to maintain attention to the task. These behavioral decrements are associated with a reduction in the cholinergic RRP within the basal forebrain, which has been shown to play a major role in regulating the activity of the PFC.
Michael Abend, Tamara Azizova, Kerstin Müller, Harald Dörr, Sven Senf, Helmut Kreppel, Galina Rusinova, Irina Glazkova, Natalia Vyazovskaya, Kristian Unger, Viktor Meineke
We evaluated gene expression in the peripheral blood of Mayak workers in relationship to occupational chronic exposure to identify permanent post-exposure signatures. The Mayak workers had experienced either a combined exposure to incorporated 239Pu and external gamma rays (n = 82) or exposure to external gamma rays (n = 18). Fifty unexposed individuals served as controls. Peripheral blood was collected, the RNA was isolated and converted into cDNA, and the cDNA was stored at −20°C. In a previous study at stage I, we screened the mRNA and microRNA transcriptome using 40 of the 150 samples and identified 95 mRNAs and 45 microRNAs. In stage II of this study, we validated these 140 candidate genes using the qRT-PCR technique on the remaining 92 blood samples (18 samples were lost for methodological reasons). We analyzed associations of normalized gene expression values in linear models separately for both exposure types (on continuous and categorical scales), adjusting for age at exposure and stratifying by gender. After further adjustment for confounders such as chronic non-cancer diseases or age at biosampling, mostly binary (on/off) dose-to-gene relationships were found for 15 mRNAs and 15 microRNAs, of which 8 mRNAs and 6 microRNAs remained significant after Bonferroni correction. Almost all of them were associated with plutonium incorporation and gender. Our study provides mRNA and microRNA gene expression changes dependent on the exposure type and gender, which occur and seem to persist after chronic radiation exposure, supporting the concept of permanent post-exposure signatures.
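For readers unfamiliar with this statistical workflow, the hypothetical Python sketch below shows one way such an association analysis can be structured: an ordinary least-squares model of normalized expression on dose, adjusted for age at exposure, stratified by gender, with a Bonferroni cutoff across the candidate genes. The data frame and column names (dose, exposure_age, gender) are placeholders, not the study's actual variables or code.

```python
# Hypothetical sketch of a dose-to-gene association analysis. The data frame
# `df` and its columns (dose, exposure_age, gender, one column per candidate
# gene) are placeholders, not the study's actual data or variable names.
import pandas as pd
import statsmodels.formula.api as smf

def test_candidates(df: pd.DataFrame, genes: list, alpha: float = 0.05) -> dict:
    """Return (gender, gene) pairs whose dose term survives Bonferroni correction."""
    pvalues = {}
    for gender, sub in df.groupby("gender"):            # stratify by gender
        for gene in genes:
            fit = smf.ols(f"{gene} ~ dose + exposure_age", data=sub).fit()
            pvalues[(gender, gene)] = fit.pvalues["dose"]
    cutoff = alpha / len(genes)                          # Bonferroni correction
    return {key: p for key, p in pvalues.items() if p < cutoff}
```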
Paula C. Genik, Irina Vyazunova, Leta S. Steffen, Jeffery W. Bacher, Helle Bielefeldt-Ohmann, Scott McKercher, Robert L. Ullrich, Christina M. Fallgren, Michael M. Weil, F. Andrew Ray
Most murine radiation-induced acute myeloid leukemias involve biallelic inactivation of the PU.1 gene, with one allele being lost through a radiation-induced chromosomal deletion and the other allele affected by a recurrent point mutation in codon 235 that is likely to be spontaneous. The short latencies of acute myeloid leukemias occurring in nonirradiated mice engineered with PU.1 conditional knockout or knockdown alleles suggest that once both copies of PU.1 have been lost, any other steps involved in leukemogenesis occur rapidly. Yet spontaneous acute myeloid leukemias have not been reported in mice heterozygous for a PU.1 knockout allele, an observation that conflicts with the understanding that the PU.1 codon 235 mutation is spontaneous. Here we describe experiments showing that the lack of spontaneous leukemia in PU.1 heterozygous knockout mice is not due to insufficient monitoring times, insufficient numbers of mice, or the genetic background of the knockout mice. The results reveal that spontaneous leukemias that develop in mice on the mixed 129S2/SvPas and C57BL/6 background of the knockout mice arise by a pathway that does not involve biallelic PU.1 mutation. In addition, the latency of radiation-induced leukemia in PU.1 heterozygous mice on a genetic background susceptible to radiation-induced leukemia indicates that the codon 235 mutation is not a rate-limiting step in radiation leukemogenesis driven by PU.1 loss.
Radiation injury to skin poses substantial morbidity risks in the curative treatment of cancers and is also of concern in the context of radiological attack or nuclear accident scenarios. Late effects can be severe and are frequently characterized by subcutaneous fibrosis and morbidity. The experiments presented here assess the potential of MW01-2-151SRM (MW-151), a novel small-molecule inhibitor of microglial activation and associated proinflammatory cytokine/chemokine production, as a mitigator of radiation-induced skin injury. Groups of C57BL/6 mice received focal irradiation of the right hind leg at a dose of 30 Gy. Therapy was initiated on day 3, day 7 or day 14 postirradiation and maintained for 21 days by intraperitoneal injections administered three times per week. The primary end point was skin injury, which was assessed three times a week for at least 60 days postirradiation and scored using a semi-quantitative scale. Secondary end points measured at selected times included histology (primarily H&E) and immunofluorescence labeling of macrophage (F4-80) and inflammatory (TGF-β, TNF-α, MMP9) markers. Relative to untreated controls, mitigation of radiation-induced skin injury in mice receiving MW-151 was highly dependent on the timing of therapy initiation. Initiation on day 3 postirradiation had no discernible effect, whereas mitigating effects were maximal following initiation on day 7 and present to a lesser degree following initiation on day 14. The response to MW-151 therapy in individual animals was essentially all-or-none, and the relative benefits associated with the timing of therapy initiation primarily reflected differences in the number of responders. These data support the hypothesis that proinflammatory cytokines/chemokines play complex roles in orchestrating the response to radiation-induced skin injury and suggest that there is a critical period during which they initiate the pathogenesis resulting in late effects.
One of the main issues with low-energy internal emitters is that the ranges of the emitted beta particles are very short compared with the dimensions of the biological targets. Depending on the chemical form, the radionuclide may be more concentrated either in the cytoplasm or in the nucleus of the target cell. Consequently, since in most cases conventional dosimetry neglects this issue, it may overestimate or underestimate the dose to the nucleus and hence the biological effects. To assess the magnitude of these deviations and to provide a realistic evaluation of the localized energy deposition by low-energy internal emitters, the biophysical track-structure code PARTRAC was used to calculate nuclear doses, DNA damage yields and fragmentation patterns for different localizations of radionuclides in human interphase fibroblasts. The nuclides considered in the simulations were tritium and nickel-63, which emit electrons with average energies of 5.7 keV (range in water of 0.42 μm) and 17 keV (range of 5 μm), respectively, covering both very short and medium ranges of beta-decay products. The simulation results showed that the largest deviations from conventional dosimetry occur for inhomogeneously distributed short-range emitters. For radionuclides distributed uniformly in the cytoplasm but excluded from the cell nucleus, the dose in the nucleus is 15% of the average dose in the cell in the case of tritium but 64% for nickel-63. The numbers of double-strand breaks (DSBs) and the distributions of DNA fragments also depend on the subcellular localization of the radionuclides. In the low- and medium-dose regions investigated here, DSB numbers are proportional to the nuclear dose, with about 50 DSB/Gy for both studied nuclides. In addition, DSB numbers on specific chromosomes depend on the radionuclide localization in the cell as well, with chromosomes located more peripherally in the cell nucleus being more damaged by short-range emitters in the cytoplasm than chromosomes located more centrally. These results illustrate the potential for over- or underestimating the risk associated with low-energy emitters, particularly for tritium intake, when their distribution at the subcellular level is not appropriately considered.
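The magnitude of the over- or underestimation can be illustrated directly from the ratios reported above. The short Python sketch below applies the reported nuclear-dose fractions for a cytoplasm-only source (15% for tritium, 64% for nickel-63) and the ~50 DSB/Gy yield to an example cell-averaged dose; the 0.1 Gy value is purely illustrative.

```python
# Illustrative use of the reported ratios: how conventional (whole-cell)
# dosimetry can misestimate DSB yields when a short-range emitter sits only
# in the cytoplasm. The fractions and the ~50 DSB/Gy yield come from the
# summary above; the 0.1 Gy cell-averaged dose is an arbitrary example.
NUCLEAR_DOSE_FRACTION = {"H-3": 0.15, "Ni-63": 0.64}   # cytoplasm-only source
DSB_PER_GY = 50.0

def expected_dsb(cell_averaged_dose_gy: float, nuclide: str) -> float:
    """DSBs per nucleus using the nuclear dose rather than the cell-averaged dose."""
    return DSB_PER_GY * cell_averaged_dose_gy * NUCLEAR_DOSE_FRACTION[nuclide]

for nuclide in NUCLEAR_DOSE_FRACTION:
    naive = DSB_PER_GY * 0.1
    corrected = expected_dsb(0.1, nuclide)
    print(f"{nuclide}: conventional estimate {naive:.1f} DSB, "
          f"localization-corrected {corrected:.1f} DSB")
```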
While protracting exposure to low-LET radiation usually leads to a reduction in its effectiveness for a given dose, for high-LET radiation there is now substantial evidence for what has been called an inverse dose-rate effect, where under certain circumstances there is an increase in carcinogenesis or other biological effects with decreasing dose rate. This study investigates the influence of dose rate on the induction of chromosome aberrations and gene mutations after irradiation of plateau-phase V79-4 cells with high-LET alpha particles. The induction of chromosomal aberrations exhibited a linear relationship with dose and showed evidence of a small but significant conventional dose-rate dependence, with low-dose-rate exposures (0.28 Gy h−1) being less effective by about 20% (ratio 0.82 ± 0.04) compared to acute exposures. However, no significant dose-rate effect was observed for cell survival or the induction of mutations in the HPRT gene for low-dose-rate exposures (8.0 × 10−5 and 1.5 × 10−2 Gy h−1 for doses of 0.36 and 0.69 Gy, respectively) when compared to acute exposures.
DNA double-strand breaks (DSBs) induced by ionizing radiation pose a major threat to cell survival. The cell can respond to the presence of DSBs through two major repair pathways: homologous recombination (HR) and nonhomologous end joining (NHEJ). Higher levels of cell death are induced by high-linear energy transfer (LET) radiation than by low-LET radiation, even at the same physical doses, due to less effective and efficient DNA repair. To clarify whether high-LET radiation inhibits all repair pathways or specifically one repair pathway, studies were designed to examine the effects of radiation with different LET values on DNA DSB repair and radiosensitivity. Embryonic fibroblasts bearing repair gene (NHEJ-related Lig4 and/or HR-related Rad54) knockouts (KO) were used and their responses were compared to those of wild-type cells. The cells were exposed to X rays and spread-out Bragg peak (SOBP) carbon-ion beams, as well as to carbon, iron, neon and argon ions. Cell survival was measured with colony-forming assays. The sensitization enhancement ratio (SER) values were calculated using the 10% survival doses of wild-type cells and repair-deficient cells. Cellular radiosensitivity, in descending order, was: double-KO cells > Lig4-KO cells > Rad54-KO cells > wild-type cells. Although Rad54-KO cells had an almost constant SER value, Lig4-KO cells showed a high SER value compared to Rad54-KO cells, even with increasing LET values. These results suggest that with carbon-ion therapy, targeting NHEJ repair yields higher radiosensitivity than targeting homologous recombination repair.
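A hedged sketch of how an SER value of this kind can be computed is given below: fit a linear-quadratic survival curve S(D) = exp[-(αD + βD²)] to colony-forming data, solve for the 10% survival dose D10, and take the ratio of the wild-type D10 to that of the repair-deficient line. The dose points and surviving fractions are invented for illustration and are not the data from this study.

```python
# Illustrative SER calculation from colony-forming data (values made up).
import numpy as np
from scipy.optimize import curve_fit

def lq_survival(dose, alpha, beta):
    """Linear-quadratic surviving fraction S(D) = exp(-(alpha*D + beta*D^2))."""
    return np.exp(-(alpha * dose + beta * dose**2))

def d10(alpha, beta):
    """Dose giving 10% survival: solve alpha*D + beta*D^2 = ln(10)."""
    if beta == 0:
        return np.log(10) / alpha
    return (-alpha + np.sqrt(alpha**2 + 4 * beta * np.log(10))) / (2 * beta)

doses = np.array([0, 1, 2, 4, 6, 8])
sf_wild = np.array([1.0, 0.85, 0.65, 0.30, 0.10, 0.03])   # illustrative only
sf_ko = np.array([1.0, 0.60, 0.30, 0.06, 0.01, 0.002])    # illustrative only

p_wt, _ = curve_fit(lq_survival, doses, sf_wild, p0=[0.2, 0.02])
p_ko, _ = curve_fit(lq_survival, doses, sf_ko, p0=[0.4, 0.05])

ser = d10(*p_wt) / d10(*p_ko)
print(f"SER (10% survival) = {ser:.2f}")
```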
Low-linear energy transfer (low-LET) γ-ray exposure is a risk factor for colorectal cancer (CRC). Due to their high-LET nature, energetic iron ions found in space are expected to pose greater CRC risks to astronauts undertaking long-duration space missions beyond low Earth orbit. Wild-type p53-induced phosphatase 1 (Wip1) is important for the cellular DNA damage response, and its abrogation has been shown to inhibit spontaneous intestinal tumorigenesis in APCMin/+ mice, a well-studied mouse model of human CRC. However, the relationship of Wip1 to radiation-induced intestinal tumorigenesis, especially by energetic iron ions, has not been investigated in APCMin/+ mice. We have previously reported a greater intestinal tumorigenic potential of iron-ion radiation relative to 137Cs γ rays, so the purpose of the current study was to investigate whether Wip1 abrogation could influence high-LET-dependent intestinal tumorigenesis in APCMin/+ mice. Intestinal tumor frequency and grade were assessed in APCMin/+/Wip1–/– mice and results were compared to those in APCMin/+/Wip1+/+ mice after exposure to a mean absorbed dose of 2 Gy from 137Cs γ rays or 1.6 Gy from 1 GeV/n iron ions. Cellular differentiation and proliferation were also assessed in the intestinal tumors of sham-irradiated and irradiated mice. Decreased tumor frequency and lower tumor grade were observed in APCMin/+/Wip1–/– relative to APCMin/+/Wip1+/+ mice. Notably, a similar decrease (∼6-fold in both groups) in tumor number was observed in sham-irradiated and γ-irradiated APCMin/+/Wip1–/– relative to APCMin/+/Wip1+/+ mice. However, tumorigenesis in the energetic iron-ion exposed group was reduced ∼8-fold in APCMin/+/Wip1–/– relative to APCMin/+/Wip1+/+ mice. A significantly lower proliferation/differentiation index in tumors of iron-ion exposed APCMin/+/Wip1–/– relative to APCMin/+/Wip1+/+ mice suggests that reduced proliferation and enhanced differentiation as a result of Wip1 abrogation may be involved. In conclusion, the current study demonstrated that the absence of Wip1 blocked radiation-induced intestinal tumorigenesis irrespective of radiation quality and has implications for developing preventive strategies against the tumorigenic potential of radiation exposure on Earth and in outer space.
Radiation fibrosis of the lung is a late toxicity of thoracic irradiation. Epidermal growth factor (EGF) signaling has previously been implicated in radiation lung injury. We hypothesized that TGF-α, an EGF receptor ligand, plays a key role in radiation-induced lung fibrosis. Mice deficient in transforming growth factor alpha (TGF-α–/–) and control C57Bl/6J (C57-WT) mice were exposed to thoracic irradiation in 5 daily fractions of 6 Gy. Cohorts of mice were followed for survival (n ≥ 5 per group) and tissue collection (n = 3 per strain and time point). Collagen accumulation in irradiated lungs was assessed by Masson's trichrome staining and analysis of hydroxyproline content. Cytokine levels in lung tissue were assessed with ELISA. The effects of TGF-α on pneumocyte and fibroblast proliferation and collagen production were analyzed in vitro. Lysyl oxidase (LOX) expression and activity were measured in vitro and in vivo. Irradiated C57-WT mice had a median survival of 24.4 weeks compared to 48.2 weeks for irradiated TGF-α–/– mice (P = 0.001). At 20 weeks after irradiation, hydroxyproline content was markedly increased in C57-WT mice exposed to radiation compared to TGF-α–/– mice exposed to radiation or unirradiated C57-WT mice (63.0, 30.5 and 37.6 μg/lung, respectively; P = 0.01). C57-WT mice exposed to radiation had dense foci of subpleural fibrosis at 20 weeks after exposure, whereas the lungs of irradiated TGF-α–/– mice were largely devoid of fibrotic foci. Lung tissue concentrations of IL-1β, IL-4, TNF-α, TGF-β and EGF at multiple time points after irradiation were similar in C57-WT and TGF-α–/– mice. TGF-α in lung tissue of C57-WT mice rose rapidly after irradiation and remained elevated through 20 weeks. TGF-α–/– mice had lower basal LOX expression than C57-WT mice. Both LOX expression and LOX activity were increased after irradiation in all mice, but to a lesser degree in TGF-α–/– mice. Treatment of NIH-3T3 fibroblasts with TGF-α resulted in increases in proliferation, collagen production and LOX activity. These studies identify TGF-α as a critical mediator of radiation-induced lung injury and a novel therapeutic target in this setting. Further, these data implicate TGF-α as a mediator of collagen maturation through a TGF-β-independent activation of lysyl oxidase.