Ravi S. Misra, Carl J. Johnston, Angela M. Groves, Marta L. DeDiego, Joe St. Martin, Christina Reed, Eric Hernady, Jennie Miller, Tanzy Love, Jacob N. Finkelstein, Jacqueline P. Williams
A number of investigators have suggested that exposure to low-dose radiation may pose a potentially serious health risk. However, the majority of these studies have focused on the short-term rather than long-term effects of exposure to fixed-source radiation, and few have examined the effects of internal contamination. Additionally, very few studies have focused on exposure in juveniles, when organs are still developing and could be more sensitive to the toxic effects of radiation. To specifically address whether early-life radiation injury may affect long-term immune competence, we studied 14-day-old juvenile pups that were either 5 Gy total-body irradiated or injected internally with 50 μCi soluble 137Cs, then infected with influenza A virus at 26 weeks after exposure. After influenza infection, all groups demonstrated immediate weight loss. We found that externally irradiated, infected animals failed to recover weight relative to age-matched infected controls, whereas internally 137Cs-contaminated, infected animals recovered weight at a rate and to a degree similar to controls. Externally and internally irradiated mice demonstrated reduced levels of club cell secretory protein (CCSP) message in their lungs after influenza infection. The externally irradiated group did not recover CCSP expression even at the two-week time point after infection. Although the antibody response and viral titers did not appear to be affected by either radiation modality, there was a slight increase in monocyte chemoattractant protein (MCP)-1 expression in the lungs of externally irradiated animals 14 days after influenza infection, with increased cellular infiltration present. Notably, an increase in the number of regulatory T cells was seen in the mediastinal lymph nodes of irradiated mice relative to uninfected mice. These data confirm the hypothesis that early-life irradiation may have long-term consequences for the immune system, leading to an altered antiviral response.
There is a need for minimally invasive biomarkers that can accurately and quickly quantify radiation exposure. Radiation-responsive proteins have applications in clinical medicine and in mass population screenings after a nuclear or radiological incident, where the level of radiation exposure and the exposure pattern complicate medical triage by first responders. In this study, we evaluated the efficacy of the acute phase protein serum amyloid A (SAA) as a biomarker for radiation exposure using plasma from irradiated mice. Ten-week-old female C57BL/6 mice received a 1–8 Gy single whole-body or partial-body dose from a Pantak X-ray source at a dose rate of 2.28 Gy/min. Plasma was collected by mandibular or cardiac puncture at 6, 24, 48 and 72 h or 1–3 weeks postirradiation. SAA levels were determined using a commercially available ELISA assay. Data were pooled to generate SAA μg/ml threshold values correlating plasma SAA levels with radiation dose. SAA levels were significantly elevated over control at all exposures between 2 and 8 Gy at 24 h postirradiation, but not at 6, 48 and 72 h or 1–3 weeks postirradiation. SAA levels at 1 Gy were not significantly elevated over control at any time point. Total-body-irradiated (TBI) SAA levels at 24 h were used to generate a dose prediction model that successfully differentiated TBI mice into dose-received cohorts of control/1 Gy and ≥2 Gy with a high degree of accuracy in a blind study. Dose prediction of partial-body exposures based on the TBI model showed predictive accuracy increasing with the percentage of the body exposed to radiation. Our findings indicate that plasma SAA levels might be a useful biomarker for radiation exposure in a variety of total- and partial-body irradiation settings.
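As a rough illustration of the threshold-based triage described above, the sketch below classifies 24 h plasma SAA readings against a single cutoff; the cutoff value, function name and sample values are hypothetical placeholders, not the thresholds derived from the pooled mouse data.

# Hedged sketch: assign a 24 h plasma SAA measurement (ug/ml) to a dose cohort,
# mirroring the two-group TBI dose prediction model described above.
SAA_CUTOFF_UG_ML = 50.0  # hypothetical threshold, not the study's derived value

def classify_saa(saa_ug_ml: float) -> str:
    return ">=2 Gy" if saa_ug_ml >= SAA_CUTOFF_UG_ML else "control/1 Gy"

# Example triage of a batch of 24 h plasma samples (synthetic values)
print([classify_saa(s) for s in (12.3, 48.7, 95.1, 150.4)])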
In the event of a nuclear accident or radiological terrorist attack, there will be a pressing need for biodosimetry to triage a large, potentially exposed population and to assign individuals to appropriate treatment. Exposures from fallout are likely, resulting in protracted dose delivery that would, in turn, impact the extent of injury. Biodosimetry approaches that can distinguish such low-dose-rate (LDR) exposures from acute exposures have not yet been developed. In this study, we used the C57BL/6 mouse model in an initial investigation of the impact of low-dose-rate delivery on the transcriptomic response in blood. While a large number of the same genes responded to LDR and acute radiation exposures, for many genes the magnitude of response was lower after LDR exposures. Some genes, however, were differentially expressed (P < 0.001, false discovery rate <5%) in mice exposed to LDR compared with mice exposed to acute radiation. We identified a set of 164 genes that correctly classified 97% of the samples in this experiment as exposed to acute or LDR radiation using a support vector machine algorithm. Gene expression is a promising approach to radiation biodosimetry, enhanced greatly by this first demonstration of its potential for distinguishing between acute and LDR exposures. Further development of this aspect of radiation biodosimetry, either as part of a complete gene expression biodosimetry test or as an adjunct to other methods, could provide vital triage information in a mass radiological casualty event.
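A minimal sketch of the classification step described above, assuming a scikit-learn support vector machine applied to a blood gene expression matrix restricted to a signature gene set; the expression matrix, labels and printed accuracy are synthetic placeholders, not the study's 164-gene data.

# Sketch: acute vs. low-dose-rate (LDR) classification from gene expression
# with a linear support vector machine; the data below are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_signature_genes = 60, 164                # signature size taken from the abstract
X = rng.normal(size=(n_samples, n_signature_genes))   # log-expression values (synthetic)
y = rng.integers(0, 2, size=n_samples)                # 0 = acute, 1 = LDR (synthetic)

clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)             # cross-validated classification accuracy
print("mean classification accuracy:", scores.mean())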
High linear energy transfer (LET) α particles are important with respect to the carcinogenic risk associated with human exposure to ionizing radiation, most notably to radon and its progeny. Additionally, the potential use of alpha-particle-emitting radionuclides in radiotherapy is increasingly being explored. Within the body, the emitted alpha particles slow down, traversing a number of cells with a range of energies and therefore with varying efficiencies at inducing biological response. The LET of the particle typically rises from ~70–90 keV μm−1 at the start of the track (depending on initial energy) to a peak of ~237 keV μm−1 towards the end of the track, before falling again at the very end of its range. To investigate the variation in biological response with incident energy, a plutonium-238 alpha-particle irradiator was calibrated to enable studies with incident energies ranging from 4.0 MeV down to 1.1 MeV. The variation in clonogenic survival of V79-4 cells was determined as a function of incident energy, along with the relative variation in the initial yields of DNA double-strand breaks (DSBs) measured using the FAR assay. The clonogenic survival data also extend previously published data obtained at the Medical Research Council (MRC), Harwell using the same cells irradiated with helium ions with energies ranging from 34.9 MeV to 5.85 MeV. These studies were performed in conjunction with cell morphology measurements on live cells, enabling the determination of absorbed dose and calculation of the average LET in the cell. The results show an increase in relative biological effectiveness (RBE) for cell inactivation with decreasing helium ion energy (increasing LET), reaching a maximum at incident energies of ~3.2 MeV and a corresponding average LET of 131 keV μm−1, above which the RBE is observed to fall at lower energies (higher LETs). The effectiveness of single alpha-particle traversals (relevant to low-dose exposure) at inducing cell inactivation was observed to increase with decreasing energy, reaching a maximum at incident energies of ~1.8 MeV (average LET of 190 keV μm−1), where a single traversal corresponded to a survival probability of ~68%, i.e., ~0.39 lethal lesions per track. However, the efficiency of a single traversal will also vary significantly with cell morphology and angle of incidence, as well as cell type.
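The quoted figure of ~0.39 lethal lesions per track follows from the single-traversal survival probability by Poisson statistics: if a traversal produces on average m lethal lesions, survival requires that none occur, so

S = e^{-m} \quad\Longrightarrow\quad m = -\ln S = -\ln(0.68) \approx 0.39 \ \text{lethal lesions per track.}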
In a mass-casualty radiation event, individualized therapy may overwhelm available resources, and feasibility issues suggest a need for the development of population-based strategies. To investigate the efficacy of a population-based strategy, Chinese macaques (n = 46) underwent total-body irradiation and received preemptive antibiotics and IV hydration on predetermined postirradiation days; they were then compared to macaques (n = 48) that received subject-based care, in which blood transfusions, IV hydration, nutritional supplementation and antibiotic supportive measures were provided. Estimated radiation doses for the LD30/60, LD50/60 and LD70/60 of animals with subject-based care were 6.83 Gy (6.21, 7.59), 7.44 Gy (6.99, 7.88) and 8.05 Gy (7.46, 8.64), respectively, and with population-based care were 5.61 Gy (5.28, 6.17), 6.62 Gy (6.13, 7.18) and 7.63 Gy (7.21, 8.20), respectively. Analysis of four time periods, 0–9, 10–15, 16–25 and 26–60 days postirradiation, identified significant mortality differences during the period of 10–15 days. A subset analysis at higher radiation doses (6.75–7.20 Gy, n = 32) indicated that hydration, nutrition and septic status were not significantly different between treatments. Whole blood transfusion treatment, administered only in subject-based care, was associated with significantly higher platelet and absolute neutrophil counts. Median platelet counts greater than 5,670 cells/μl and absolute neutrophil counts greater than 26 cells/μl during this period correlated with survival. We observed that the population-based treatment increased the LD50/60 compared to nontreatment (6.62 Gy vs. 4.92 Gy) and may be further optimized during days 10–15, when strategic blood transfusions or other strategies to increase neutrophil and platelet counts may further improve survival rates in subjects exposed to high doses of radiation.
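As an illustration of how an LD50/60 of this kind can be estimated, the sketch below fits a logistic dose-mortality curve with SciPy; the dose and mortality values are illustrative placeholders, not the macaque data, and the actual analysis may have used a different (e.g., probit) model.

# Sketch: estimate LD50/60 by fitting a logistic 60-day mortality curve.
import numpy as np
from scipy.optimize import curve_fit

def logistic(dose, ld50, slope):
    # probability of death by day 60 as a function of dose (Gy)
    return 1.0 / (1.0 + np.exp(-slope * (dose - ld50)))

doses = np.array([5.0, 6.0, 6.5, 7.0, 7.5, 8.0])             # Gy (illustrative)
mortality = np.array([0.10, 0.30, 0.45, 0.60, 0.75, 0.90])   # fraction dead by day 60 (illustrative)

(ld50, slope), _ = curve_fit(logistic, doses, mortality, p0=(6.5, 2.0))
print(f"estimated LD50/60 ~ {ld50:.2f} Gy")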
Previously reported studies of the Techa River Cohort have established associations between radiation dose and the occurrence of solid cancers and leukemia (non-CLL) that appear to be linear in dose response. These analyses include 17,435 cohort members who were alive and not known to have had cancer prior to January 1, 1956, and who lived in areas near the river or in Chelyabinsk City at some time between 1956 and the end of 2007; they utilize individualized dose estimates computed using the Techa River Dosimetry System 2009 and include five more years of follow-up. The median and mean dose estimates based on these doses are consistently higher than those based on the earlier Techa River Dosimetry System 2000 dose estimates. This article includes new site-specific cancer risk estimates and risk estimates adjusted for available information on smoking. There is a statistically significant (P = 0.02) linear trend in the smoking-adjusted all-solid-cancer incidence risks, with an excess relative risk (ERR) per 100 mGy of 0.077 and a 95% confidence interval of 0.013–0.15. Examination of site-specific risks revealed statistically significant radiation dose effects only for cancers of the esophagus and uterus, with ERR per 100 mGy estimates in excess of 0.10. Esophageal cancer risk estimates were modified by ethnicity and sex, but not by smoking. While the solid cancer risk estimates are attenuated when esophageal cancer is removed (ERR = 0.063 per 100 mGy), a dose-response relationship is present, and it remains likely that radiation exposure has increased the risks for most solid cancers in the cohort despite the lack of power to detect statistically significant risks for specific sites.
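For reference, a linear excess relative risk model of the kind reported here can be written as follows (a sketch; λ₀ denotes the baseline incidence rate, which in the cohort analyses depends on attained age, sex, ethnicity and smoking status, and D is dose):

\lambda(D) = \lambda_0 \left[ 1 + \beta\,\frac{D}{100\ \mathrm{mGy}} \right], \qquad \hat{\beta} = 0.077\ (95\%\ \mathrm{CI}\ 0.013\text{–}0.15)\ \text{for all solid cancers combined.}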
The accidental gamma radiation exposure of an industrial radiography worker and the cytogenetic examination of the worker's blood lymphocytes are described here. The exposure occurred because of a malfunction at the entrance of a 192Ir source into its depleted-uranium shielding device during operation. Because the source was sealed, no additional beta radiation exposure was assumed. The worker's thermoluminescent dosimeter indicated an absorbed dose of 0.078 Sv, and the exposure presumably took place in December 2013. No clinical symptoms were reported in the case history after the potential radiation exposure. Four months after the incident, it was decided that biological dosimetry using dicentric chromosome and micronucleus analysis would be performed for radiation protection purposes and to clarify the uncertainties in the radiation dose received by the exposed worker. The micronucleus frequency was not increased above the laboratory's control value for the micronucleus background frequency of unexposed individuals. However, the observed dicentric frequency (0.003 dicentrics/cell) differed significantly from the laboratory's background level of dicentric chromosomes in unexposed individuals (0.0007 dicentrics/cell). Dicentric analysis of 2,048 metaphase cells resulted in an estimated dose of no more than 0.181 Gy (95% upper confidence level), no less than 0.014 Gy (95% lower confidence level) and a mean dose of 0.066 Gy (photon-equivalent whole-body exposure), based on interpolation from the laboratory's calibration curve for 60Co gamma radiation. Since overdispersion of dicentric chromosomes (u = 9.78) indicated a heterogeneous (partial-body) exposure, we applied the Dolphin method and estimated an exposure of 2.1 Sv affecting 21% of the body volume. Because the overdispersion of dicentric chromosomes was caused by only one heavily damaged cell containing two dicentrics, it is possible that this was an incidental finding. In summary, a radiation overexposure of the radiography worker must be assumed, and this case should be considered a potential partial-body exposure scenario.
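A minimal sketch of the whole-body dose estimation step, inverting a linear-quadratic dicentric calibration curve Y = C + αD + βD²; the α and β coefficients below are hypothetical placeholders, not the laboratory's 60Co calibration values, so the printed dose is illustrative only.

# Sketch: photon-equivalent whole-body dose from an observed dicentric yield.
import math

C     = 0.0007   # background dicentrics per cell in unexposed individuals (from the abstract)
ALPHA = 0.02     # dicentrics per cell per Gy (hypothetical)
BETA  = 0.06     # dicentrics per cell per Gy^2 (hypothetical)

def dose_from_yield(y_obs: float) -> float:
    # positive root of C + ALPHA*D + BETA*D**2 = y_obs
    disc = ALPHA**2 + 4.0 * BETA * (y_obs - C)
    return (-ALPHA + math.sqrt(disc)) / (2.0 * BETA)

# Example: 6 dicentrics scored in 2,048 metaphases (~0.003 dicentrics/cell)
print(f"estimated dose ~ {dose_from_yield(6 / 2048):.3f} Gy")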
Synchrotron radiation-Fourier transform infrared (SR-FTIR) microscopy coupled with multivariate data analysis was used as an independent modality to monitor the cellular bystander effect. Single, living prostate cancer PC-3 cells were irradiated with various numbers of protons, ranging from 50 to 2,000, at an energy of either 1 or 2 MeV using a proton microprobe. SR-FTIR spectra of cells fixed after exposure to protons, and of nonirradiated neighboring cells (bystander cells), were recorded. Spectral differences were observed in both the directly targeted and bystander cells, and included changes in the DNA backbone and nucleobases, along with changes in protein secondary structure. Principal component analysis (PCA) was used to investigate the variance in the entire data set. The percentage of bystander cells relative to the applied number of protons at the two energies was calculated. Of all the applied quantities, a dose of 400 protons at 2 MeV was found to be the most effective at causing significant macromolecular perturbation in bystander PC-3 cells.
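A minimal sketch of the principal component analysis step, assuming scikit-learn's PCA applied to a matrix of absorbance spectra (rows = cells, columns = wavenumber points); the spectra here are random placeholders, not measured SR-FTIR data.

# Sketch: PCA on cell spectra to explore variance between targeted and bystander groups.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
spectra = rng.normal(size=(40, 800))   # 40 cells x 800 wavenumber points (synthetic)

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)    # project each cell onto PC1/PC2
print("explained variance ratio:", pca.explained_variance_ratio_)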
The natural atmospheric radiation environment is composed of secondary cosmic rays produced when primary cosmic rays hit the atmosphere. Understanding atmospheric radiation and its dynamics is essential for evaluating single event effects, so that radiation risks in aviation and the space environment (space weather) can be assessed. In this article, we present an atmospheric radiation model, named ATMORAD (Atmospheric Radiation), which is based on GEANT4 simulations of extensive air showers driven by primary spectra that depend only on the solar modulation potential (force-field approximation). The solar modulation potential can in turn be deduced from neutron spectrometer measurements using ATMORAD. Comparisons between our methodology and standard approaches or measurements are also discussed. This work demonstrates the potential of combining simulations of extensive air showers with neutron spectrometry to monitor solar activity.
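For context, the standard Gleeson–Axford force-field approximation (sketched here from its usual textbook form, not taken from the article) relates the modulated differential intensity J at Earth to the local interstellar spectrum J_LIS through a single energy shift set by the modulation potential φ:

J(E) = J_{\mathrm{LIS}}(E + \Phi)\,\frac{E\,(E + 2E_0)}{(E + \Phi)\,(E + \Phi + 2E_0)}, \qquad \Phi = \frac{Ze}{A}\,\varphi,

where E is the kinetic energy per nucleon, E_0 the nucleon rest-mass energy, and Z and A the particle's charge and mass numbers.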
Maria Wojewodzka, Sylwester Sommer, Marcin Kruszewski, Katarzyna Sikorska, Maciej Lewicki, Halina Lisowska, Aneta Wegierek-Ciuk, Magdalena Kowalska, Anna Lankoff
Biodosimetric methods used to measure the effects of radiation are critical for estimating the health risks to irradiated individuals or populations. The direct measurement of radiation-induced γ-H2AX foci in peripheral blood lymphocytes is one approach that provides a useful end point for triage. Despite the documented advantages of the γ-H2AX assay, there is considerable variation among laboratories in foci formation under the same exposure conditions and cell lines. Taking this into account, the goal of our study was to evaluate the influence of different blood processing parameters on the frequency of γ-H2AX foci and to optimize a small-blood-volume protocol for the γ-H2AX assay that simulates the finger-prick blood collection method. We found that the type of fixative, the temperature and the blood processing time markedly affect the results of the γ-H2AX assay. In addition, we propose a protocol for the γ-H2AX assay that may serve as a potential guideline in the event of a large-scale radiation incident.
During space travel, astronauts are exposed to a wide array of high-linear energy transfer (LET) particles, with differing energies and resulting biological effects. Risk assessment of these exposures carries a large uncertainty, predominantly due to the unique track structure of each particle's energy deposition. The complex damage elicited by high charge and energy (HZE) particles results both from lesions along the track core and from energetic electrons, δ rays, generated as a consequence of particle traversal. To better define how cells respond to this complex radiation exposure, a normal hTERT-immortalized skin fibroblast cell line was exposed to a defined panel of particles carefully chosen to tease out track structure effects. Phosphorylation kinetics for several key double-strand break (DSB) response proteins (γ-H2AX, pATF2 and pSMC1) were defined after exposure to ten different high-LET radiation qualities and one low-LET radiation (X rays), at two doses (0.5–2 Gy) and two time points (2 and 24 h). The results reveal that the lower-energy particles (Fe 300, Si 93 and Ti 300 MeV/u), with a narrower track width and a higher number and intensity of δ rays, cause the highest degree of persistent damage response. The persistent γ-H2AX signal at lower energies suggests that damage from these exposures is more difficult to resolve, likely due to the greater complexity of the associated DNA lesions. However, different kinetics were observed for the solely ATM-mediated phosphorylations (pATF2 and pSMC1), revealing a shallow induction at early times and a higher level of residual phosphorylation compared to γ-H2AX. The differing phospho-protein profiles, compared to γ-H2AX, suggest additional functions for these proteins within the cell. The strong correspondence between the predicted curves for energy deposition per nucleosome for each ion/energy combination and the persistent levels of γ-H2AX indicates that the nature of the energy distribution defines residual levels of γ-H2AX, an indicator of unrepaired DSBs. Our results suggest that decreasing the energy of a particle results in more complex damage that may increase genomic instability and the risk of carcinogenesis.