Phylogenetic divergence time estimates inferred from trees optimized under maximum likelihood rely on branch lengths. These branch lengths are influenced by the substitution model applied in the analysis, which can, in turn, affect divergence time estimates. To examine the effects of substitution models on divergence time estimates, we analyzed an empirical data set for Cornales with 16 calibration point constraints in maximum likelihood analyses under 19 different substitution models to obtain topologies with branch lengths. Penalized likelihood was then used to estimate divergence times for corresponding nodes of these topologies. Discrepancies in divergence time estimates among corresponding nodes of trees constructed with different models were small in most cases (falling within the 95% confidence intervals based on the best-fit model); however, we recovered instances of node ages differing by as much as 23.7% from those on corresponding nodes of the phylogeny reconstructed under our best-fit substitution model. We estimated that, averaged over all nodes within a tree, divergence times differed by 1.0–3.6% among trees based on different models; the range of variation, however, differed greatly among trees based on different substitution models. Discrepancies in divergence time estimates were associated with long branches, although using models similar to the best-fit model reduced this effect. Branches within one standard deviation of the mean branch length were an unexpected source of discrepancy regardless of the substitution model applied, although the cause of this discrepancy was unclear. We found no differences in disparity among nodes reconstructed in deep-, mid-, or shallow-level regions of the topologies. Simulations demonstrated that use of underparameterized models affected age estimates more than use of overparameterized models.
Increasing the number of calibration points can limit, but not completely remove, the discrepancies introduced by underparameterized models.