This article explores the origin of the linear no-threshold (LNT) dose-response model and how it came to be used in cancer risk assessment worldwide. Following this historical appraisal is an evaluation of the LNT model within the framework of the National Academy of Sciences' BEIR VII report on the health effects of ionizing radiation. The final section of this article assesses the LNT model's capacity to make accurate predictions of risk in the low-dose zone based on recent molecular mechanistic findings and epidemiological methods, with particular emphasis on the limitations of epidemiological studies in estimating risks in the low-dose zone.
Hypoxia exists in all solid tumors and leads to clinical radioresistance and adverse prognosis. We hypothesized that hypoxia and cellular localization of gold nanoparticles (AuNPs) could be modifiers of AuNP-mediated radiosensitization. The possible mechanistic effects of AuNPs on cell cycle distribution and DNA double-strand break (DSB) repair postirradiation were also studied. Clonogenic survival data revealed that internalized and extracellular AuNPs at 0.5 mg/mL resulted in dose enhancement factors of 1.39 ± 0.07 and 1.09 ± 0.01, respectively. Radiosensitization by AuNPs was greatest in cells under oxia, followed by chronic and then acute hypoxia. The presence of AuNPs inhibited postirradiation DNA DSB repair, but did not lead to cell cycle synchronization. The relative radiosensitivity of chronically hypoxic cells is attributed to defective DSB repair (homologous recombination) due to decreased RAD51-associated protein expression. Our results support the need for further study of AuNPs for clinical development in cancer therapy, since their efficacy is not limited in chronically hypoxic cells.
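For readers unfamiliar with how a dose enhancement factor is obtained from clonogenic survival data, the sketch below shows one common approach: fitting the linear-quadratic survival model with and without nanoparticles and taking the ratio of doses that yield the same surviving fraction. This is a minimal illustration, not the study's analysis pipeline; the survival values and the 10% survival level are hypothetical placeholders.

```python
# Illustrative sketch: dose enhancement factor (DEF) from linear-quadratic fits,
# S(D) = exp(-(alpha*D + beta*D^2)), with and without AuNPs.
# All survival values below are hypothetical, not data from this study.
import numpy as np
from scipy.optimize import curve_fit, brentq

def lq_survival(dose, alpha, beta):
    return np.exp(-(alpha * dose + beta * dose**2))

def fit_lq(doses, surviving_fractions):
    (alpha, beta), _ = curve_fit(lq_survival, doses, surviving_fractions,
                                 p0=(0.2, 0.02), bounds=(0, np.inf))
    return alpha, beta

def dose_for_survival(alpha, beta, target_sf):
    # Solve S(D) = target_sf for D on a generous bracket.
    return brentq(lambda d: lq_survival(d, alpha, beta) - target_sf, 0.01, 50.0)

doses = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
sf_control = np.array([1.0, 0.65, 0.30, 0.10, 0.03])   # hypothetical
sf_aunp    = np.array([1.0, 0.50, 0.17, 0.04, 0.008])  # hypothetical

a_c, b_c = fit_lq(doses, sf_control)
a_n, b_n = fit_lq(doses, sf_aunp)

# DEF at 10% survival: dose without AuNPs divided by dose with AuNPs.
def_10 = dose_for_survival(a_c, b_c, 0.1) / dose_for_survival(a_n, b_n, 0.1)
print(f"Dose enhancement factor at 10% survival: {def_10:.2f}")
```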
Radiation exposure resulting from radiological terrorism or military circumstances is a continuing threat to the civilian population. In an uncontrolled radiation event, it is likely that there will be other types of injury involved, including trauma. While radiation combined injury is recognized as an area of great significance, overall there is a paucity of information regarding the mechanisms underlying the interactions between irradiation and other forms of injury, or what countermeasures might be effective in ameliorating such changes. The objective of this study was to determine if difluoromethylornithine (DFMO) could reduce the adverse effects of single or combined injury if administered beginning 24 h after exposure. Eight-week-old C57BL/6J young-adult male mice received whole-body cesium-137 (137Cs) irradiation with 4 Gy. Immediately after irradiation, unilateral traumatic brain injury was induced using a controlled cortical impact system. Forty-four days postirradiation, animals were tested for hippocampus-dependent cognitive performance in the Morris water maze. After cognitive testing, animals were euthanized and their brains snap frozen for immunohistochemical assessment of neuroinflammation (activated microglia) and neurogenesis in the hippocampal dentate gyrus. Our data show that single and combined injuries induced variable degrees of hippocampus-dependent cognitive dysfunction, and that DFMO treatment given 24 h post trauma ameliorated those effects. Cellular changes including neurogenesis and numbers of activated microglia were generally not associated with the cognitive changes. Further analyses also revealed that DFMO increased hippocampal protein levels of the antioxidants thioredoxin 1 and peroxiredoxin 3 compared to vehicle-treated animals. While the mechanisms responsible for the improvement in cognition after DFMO treatment are not yet clear, these results constitute a basis for further development of DFMO as a countermeasure for ameliorating the risk of cognitive dysfunction in individuals subjected to trauma and radiation combined injury.
Interest in out-of-field radiation dose has been increasing with the introduction of new techniques, such as volumetric modulated arc therapy (VMAT). These new techniques offer superior conformity of high-dose regions to the target compared to conventional techniques; however, more normal tissue is exposed to low-dose radiation with VMAT. There is a potential increase in radiobiological effectiveness associated with lower energy photons delivered during VMAT, as normal cells are exposed to a temporal change in the incident photon energy spectrum. During VMAT deliveries, normal cells can be exposed to the primary radiation beam, as well as to transmission and scatter radiation. The impact of low-dose radiation, the radiation-induced bystander effect and a change in energy spectrum on normal cells is not well understood. The current study examined cell survival and DNA damage in normal prostate cells after exposure to out-of-field radiation both with and without the transfer of bystander factors. The effect of a change in energy spectrum out-of-field compared to in-field was also investigated. Prostate cancer (LNCaP) and normal prostate (PNT1A) cells were placed in-field and out-of-field, respectively, with the PNT1A cells located 1 cm from the field edge while the in-field cells were irradiated with 2 Gy. Clonogenic and γ-H2AX assays were performed postirradiation to examine cell survival and DNA damage. The assays were repeated when bystander factors from the LNCaP cells were transferred to the PNT1A cells, and also when the PNT1A cells were irradiated in-field with a different energy spectrum. An average out-of-field dose of 10.8 ± 4.2 cGy produced a significant reduction in colony volume and an increase in the number of γ-H2AX foci/cell in the PNT1A cells compared to the sham-irradiated control cells. An adaptive response was observed in the PNT1A cells that first received a low out-of-field dose and then the bystander factors. The PNT1A cells showed a significant increase in γ-H2AX foci formation when irradiated to 20 cGy in-field in comparison to out-of-field. However, no significant difference in cell survival or colony volume was observed whether the PNT1A cells were irradiated in-field or out-of-field. Out-of-field radiation dose alone can have a damaging effect on the proliferation of PNT1A cells when a clinically relevant dose of 2 Gy is delivered in-field. Out-of-field radiation with the transfer of bystander factors induces an adaptive response in the PNT1A cells.
Steven L. Simon, Dale L. Preston, Martha S. Linet, Jeremy S. Miller, Alice J. Sigurdson, Bruce H. Alexander, Deukwoo Kwon, R. Craig Yoder, Parveen Bhatti, Mark P. Little, Preetha Rajaraman, Dunstana Melo, Vladimir Drozdovitch, Robert M. Weinstock, Michele M. Doody
In this article, we describe recent methodological enhancements and findings from the dose reconstruction component of a study of health risks among U.S. radiologic technologists. An earlier version of the dosimetry, published in 2006, used physical and statistical models, literature-reported exposure measurements for the years before 1960, and archival personnel monitoring badge data from cohort members through 1984. The data and models previously described were used to estimate annual occupational radiation doses for 90,000 radiologic technologists, incorporating information about each individual's employment practices based on a baseline survey conducted in the mid-1980s. The dosimetry methods presented here, while using many of the same methods as before, now estimate 2.23 million annual badge doses (personal dose equivalent) for the years 1916–1997 for 110,374 technologists, but with numerous methodological improvements. Every technologist's annual dose is estimated as a probability density function to reflect uncertainty about the true dose. Multiple realizations of the entire cohort distribution were derived to account for shared uncertainties and possible biases in the input data and assumptions used. Major improvements in the dosimetry methods from the earlier version include: a substantial increase in the number of cohort member annual badge dose measurements; additional information on individual apron usage obtained from surveys conducted in the mid-1990s and mid-2000s; refined modeling to develop lognormal annual badge dose probability density functions using censored data regression models; refinements of cohort-based annual badge probability density functions to reflect individual work patterns and practices reported on questionnaires and to more accurately assess minimum detection limits; and extensive refinements in organ dose conversion coefficients to account for uncertainties in radiographic machine settings for the radiographic techniques employed. For organ dose estimation, we rely on well-researched assumptions about critical exposure-related variables and their changes over the decades, including the peak kilovoltage and filtration typically used in conducting radiographic examinations, and the usual body location for wearing radiation monitoring badges, the latter based on both literature and national recommendations. We have derived organ dose conversion coefficients based on air-kerma weighting of photon fluences from published X-ray spectra and derived energy-dependent transmission factors for protective lead aprons of different thicknesses. Findings are presented on estimated organ doses for 12 organs and tissues: red bone marrow, female breast, thyroid, brain, lung, heart, colon, ovary, testes, skin of trunk, skin of head and neck and arms, and lens of the eye.
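As one way to picture the censored data regression step mentioned above, the following sketch fits a lognormal annual badge-dose distribution by maximum likelihood when some readings fall below a minimum detection limit. The readings, detection limit and parameter values are simulated placeholders; the study's actual models additionally incorporate work histories, apron use and shared uncertainties, none of which is reflected here.

```python
# Illustrative sketch (not the cohort's dosimetry system): maximum-likelihood
# fit of a lognormal dose distribution with left-censoring at the minimum
# detection limit (MDL). All values are simulated for illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
true_mu, true_sigma, mdl = np.log(2.0), 0.8, 1.0           # mSv, hypothetical
readings = rng.lognormal(true_mu, true_sigma, size=500)
observed = readings[readings >= mdl]                        # detected badges
n_censored = np.sum(readings < mdl)                         # badges below MDL

def neg_log_likelihood(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z_obs = (np.log(observed) - mu) / sigma
    # Density term for detected readings (lognormal pdf written in log space).
    ll = np.sum(norm.logpdf(z_obs) - np.log(sigma) - np.log(observed))
    # Probability mass below the MDL for the censored readings.
    ll += n_censored * norm.logcdf((np.log(mdl) - mu) / sigma)
    return -ll

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"Fitted lognormal parameters: mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")
```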
Incidence of and mortality from cerebrovascular disease (CVD) [International Classification of Diseases 9th revision (ICD-9) codes: 430–438] were studied in a cohort of 22,377 workers first employed at the Mayak Production Association (Mayak PA) in 1948–1982 and followed up to the end of 2008. The cohort size was increased by 19% and follow-up extended by 3 years over the previous analysis. Radiation doses were estimated using an updated dosimetry system: Mayak Worker Dosimetry System 2008 (MWDS-2008). For the first time in an analysis of this cohort, quantitative smoking data were used. Workers of the study cohort were exposed occupationally to prolonged external gamma rays and internal alpha particles. The mean (±standard deviation) total dose from external gamma rays was 0.54 ± 0.76 Gy (95th percentile 2.21 Gy) for males and 0.44 ± 0.65 Gy (95th percentile 1.87 Gy) for females. The mean plutonium body burden in the 31% of workers monitored for internal exposure was 1.32 ± 4.87 kBq (95th percentile 4.71 kBq) for males and 2.21 ± 13.24 kBq (95th percentile 4.56 kBq) for females. The mean total absorbed alpha-particle dose to the liver from incorporated plutonium was 0.23 ± 0.77 Gy (95th percentile 0.89 Gy) in males and 0.44 ± 2.11 Gy (95th percentile 1.25 Gy) in females. After adjusting for nonradiation factors (gender, age, calendar period, employment period, facility, smoking, alcohol consumption), there were significantly increasing trends in CVD incidence associated with total absorbed dose from external gamma rays and total absorbed dose to the liver from internal alpha-particle radiation exposure. Excess relative risks per Gy (ERR/Gy) were 0.46 (95% CI 0.37, 0.57) and 0.28 (95% CI 0.16, 0.42), respectively, based on a linear dose-response model. Adjustments for additional factors (hypertension, body mass index, duration of employment, smoking index, and total absorbed dose to the liver from internal exposure during the analysis of external exposure and vice versa) had little effect on the results. The categorical analyses showed that CVD incidence was significantly higher among workers with total absorbed external gamma-ray doses greater than 0.1 Gy compared to those exposed to lower doses, and that CVD incidence was also significantly higher among workers with total absorbed internal alpha-particle doses to the liver from incorporated plutonium greater than 0.01 Gy compared to those exposed to lower doses. The results of the categorical analyses of CVD incidence were in good agreement with a linear dose response for external gamma-ray doses, but for internal alpha-particle doses the picture was less clear. For the first time, an excess risk of CVD mortality was seen in workers whose livers were exposed to internal alpha-particle doses greater than 0.1 Gy compared to workers exposed to doses of less than 0.01 Gy. A significant increasing trend for CVD mortality with internal alpha-particle dose was revealed in the subcohort of workers exposed to doses <1.0 Gy after adjusting for nonradiation factors, ERR/Gy = 0.84 (95% CI 0.09, 1.92). These updated results provide good evidence for a linear trend in risk of CVD incidence with external gamma-ray dose. The trend for CVD incidence with internal alpha-particle dose is less clear due to issues concerning the use of dose estimates based on bioassay measurements below the limit of detection.
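The trend estimates quoted above come from a linear excess relative risk model. A minimal sketch of that functional form is shown below, evaluated at the reported point estimates and at the mean doses given earlier; the example doses are chosen purely for illustration.

```python
# Illustrative sketch (not the paper's regression): the linear excess relative
# risk model behind the trend tests, RR(D) = 1 + (ERR/Gy) * D.
err_per_gy_external = 0.46   # reported ERR/Gy, external gamma-ray dose
err_per_gy_internal = 0.28   # reported ERR/Gy, internal alpha-particle liver dose

def relative_risk(dose_gy, err_per_gy):
    """Relative risk under a linear excess relative risk model."""
    return 1.0 + err_per_gy * dose_gy

# Example: mean male external dose (0.54 Gy) and mean male liver alpha dose (0.23 Gy).
print(relative_risk(0.54, err_per_gy_external))  # ~1.25
print(relative_risk(0.23, err_per_gy_internal))  # ~1.06
```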
The NIH/NIAID initiated a countermeasure program to develop mitigators for radiation-induced injuries from a radiological attack or nuclear accident. We have previously characterized and demonstrated mitigation of single organ injuries, such as radiation pneumonitis, pulmonary fibrosis or nephropathy, by angiotensin converting enzyme (ACE) inhibitors. Our current work extends this research to examine the potential for mitigating multiple organ dysfunctions occurring in the same irradiated rats. Using total body irradiation (TBI) followed by bone marrow transplant, we tested four doses of X radiation (11, 11.25, 11.5 and 12 Gy) to develop lethal late effects. We identified three of these doses (11, 11.25 and 11.5 Gy TBI), each lethal to all irradiated rats by 160 days, for testing mitigation of lung and kidney injury by ACE inhibitors. In this study we tested three ACE inhibitors for mitigation at the following doses: captopril (88 and 176 mg/m2/day), enalapril (18, 24 and 36 mg/m2/day) and fosinopril (60 mg/m2/day). Our primary end point was survival or criteria for euthanization of morbid animals. Secondary end points included breathing intervals, other assays for lung structure and function, and blood urea nitrogen (BUN) to assess renal damage. We found that captopril at 176 mg/m2/day increased survival after 11 or 11.5 Gy TBI. Enalapril at 18–36 mg/m2/day improved survival at all three TBI doses. Fosinopril at 60 mg/m2/day enhanced survival at a dose of 11 Gy, although no improvement was observed for pneumonitis. These results demonstrate the use of a single countermeasure to mitigate lethal late effects in multiple organs of the same animal after TBI.
L. Walsh, W. Zhang, R. E. Shore, A. Auvinen, D. Laurier, R. Wakeford, P. Jacob, N. Gent, L. R. Anspaugh, J. Schüz, A. Kesminiene, E. van Deventer, A. Tritscher, M. del Rosario Pérez
We present here a methodology for health risk assessment adopted by the World Health Organization that provides a framework for estimating risks from the Fukushima nuclear accident following the major Japanese earthquake and tsunami of March 11, 2011. Substantial attention has been given to the possible health risks associated with human exposure to radiation from damaged reactors at the Fukushima Daiichi nuclear power station. Cumulative doses were estimated and applied for each post-accident year of life, based on a reference level of exposure during the first year after the earthquake. A lifetime cumulative dose of twice the first year dose was estimated for the primary radionuclide contaminants (134Cs and 137Cs), based on Chernobyl data, relative abundances of cesium isotopes, and cleanup efforts. Risks for particularly radiosensitive cancer sites (leukemia, thyroid and breast cancer), as well as the combined risk for all solid cancers, were considered. The male and female cumulative risks of cancer incidence attributed to radiation doses from the accident, for those exposed at various ages, were estimated in terms of the lifetime attributable risk (LAR). Calculations of LAR were based on recent Japanese population statistics for cancer incidence and current radiation risk models from the Life Span Study of Japanese A-bomb survivors. Cancer risks over an initial period of 15 years after first exposure were also considered. LAR results were also given as a percentage of the lifetime baseline risk (i.e., the cancer risk in the absence of radiation exposure from the accident). The LAR results were based on either a reference first year dose (10 mGy) or a reference lifetime dose (20 mGy), so that the risk assessment may be applied to relocated and non-relocated members of the public, as well as to adult male emergency workers. The results show that the major contribution to LAR from the reference lifetime dose comes from the first year dose. For a dose of 10 mGy in the first year and continuing exposure, the lifetime radiation-related cancer risks based on lifetime dose (which are highest for children under 5 years of age at initial exposure) are small, and much smaller than the lifetime baseline cancer risks. For example, after initial exposure at age 1 year, the lifetime excess radiation risk and baseline risk of all solid cancers in females were estimated to be 0.7 × 10⁻² and 29.0 × 10⁻², respectively. The 15-year risks based on the lifetime reference dose are very small. However, for initial exposure in childhood, the 15-year risks based on the lifetime reference dose are up to 33 and 88% as large as the 15-year baseline risks for leukemia and thyroid cancer, respectively. The results may be scaled to particular dose estimates after consideration of caveats. One caveat relates to the lack of epidemiological evidence defining risks at low doses, because the predicted risks come from cancer risk models fitted to a wide dose range (0–4 Gy), which assume that the solid cancer and leukemia lifetime risks for doses less than about 0.5 Gy and 0.2 Gy, respectively, are proportional to organ/tissue doses: this is unlikely to seriously underestimate risks, but may overestimate risks. This WHO-HRA framework may be used to update the risk estimates when new population health statistics data, dosimetry information and radiation risk models become available.
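Conceptually, the lifetime attributable risk combines a dose-dependent excess incidence rate with population survival data. The skeleton below sketches that calculation under stated assumptions: the excess-rate and survival functions are hypothetical placeholders introduced only for illustration, whereas the WHO assessment derives them from Life Span Study risk models and Japanese population statistics.

```python
# Illustrative skeleton (not the WHO-HRA implementation): LAR as the sum of
# excess cancer incidence rates over attained ages, weighted by the probability
# of surviving to each age, conditional on being alive at the age at exposure.
def lar(dose_gy, age_at_exposure, excess_rate, survival, latency=5, max_age=90):
    total = 0.0
    for attained_age in range(age_at_exposure + latency, max_age + 1):
        total += (excess_rate(dose_gy, age_at_exposure, attained_age)
                  * survival(attained_age) / survival(age_at_exposure))
    return total

# Hypothetical placeholder models, for illustration only.
def excess_rate(dose_gy, age_at_exposure, attained_age):
    # Linear in dose, decreasing with attained age (per person-year).
    return 1e-4 * dose_gy * 30.0 / attained_age

def survival(age):
    # Crude linear survival curve; a real assessment uses life-table data.
    return max(0.0, 1.0 - age / 100.0)

print(lar(dose_gy=0.01, age_at_exposure=1, excess_rate=excess_rate, survival=survival))
```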
Previous work has shown that high charge and energy particle irradiation of human cells evokes a mutagenic repair phenotype, defined by increased mutagenic repair of new double-strand breaks that are introduced enzymatically, days or weeks after the initial irradiation. The effect was seen originally with 600 MeV/u 56Fe particles, which have a linear energy transfer (LET) value of 174 keV/μm, but not with X rays or γ rays (LET ≤ 2 keV/μm). To better define the radiation quality dependence of the phenomenon, we tested two ions with intermediate LET values, 1,000 MeV/u 48Ti (LET = 108 keV/μm) and 300 MeV/u 28Si (LET = 69 keV/μm). These experiments used a previously validated assay, where a rare-cutting nuclease introduces double-strand breaks in two reporter transgene cassettes, which are located on different chromosomes. Deletions of a block of sequence in one of the cassettes, or translocations between cassettes, are measured independently using a multicolor fluorescence assay. The results showed that 48Ti was a potent, but transient, inducer of mutagenic repair, based on increased frequency of nuclease-induced translocations. The 48Ti ions did not affect the frequency of nuclease-induced deletions. The 28Si ions had no measurable effect on either endpoint. There was a close correlation between the induction of the mutagenic repair phenomenon and the frequency of micronuclei in the targeted population (R² = 0.74), whereas there was no apparent correlation with radiation-induced cell inactivation. Together, these results better define the radiation quality dependence of the mutagenic repair phenomenon and establish its correlation, or lack of correlation, with other endpoints.
Future space missions are expected to include increased extravehicular activities (EVAs) during which astronauts are exposed to high-energy space radiation while breathing 100% oxygen. Given that brain irradiation can lead to cognitive impairment, and that oxygen is a potent radiosensitizer, there is a concern that astronauts may be at greater risk of developing cognitive impairment when exposed to space radiation while breathing 100% O2 during an EVA. To address this concern, unanesthetized, unrestrained, young adult male Fischer 344 × Brown Norway rats were allowed to breathe 100% O2 for 30 min prior to, during and 2 h after whole-body irradiation with 0, 1, 3, 5 or 7 Gy doses of 18 MV X rays delivered from a medical linear accelerator at a dose rate of ~425 mGy/min. Irradiated and unirradiated rats breathing air (~21% O2) served as controls. Cognitive function was assessed 9 months postirradiation using the perirhinal cortex-dependent novel object recognition task. Cognitive function was not impaired until the rats breathing either air or 100% O2 received a whole-body dose of 7 Gy. However, at all doses, cognitive function of the irradiated rats breathing 100% O2 was improved over that of the irradiated rats breathing air. These data suggest that astronauts are not at greater risk of developing cognitive impairment when exposed to space radiation while breathing 100% O2 during an EVA.