The original ITERS (Harms, Cryer, & Clifford, 1990) contained 34 Items organized into 7 subscales. Each Item was presented as a 7-point Likert-type scale with four quality levels, each defined by a descriptive paragraph illustrating the aspects of quality expected at that level. The revised edition, the ITERS-R™ (Harms, Cryer, & Clifford, 2003), consisted of 39 Items organized into 7 subscales. In the ITERS-R™, each level of each Item was defined by numbered indicators, a change that enabled assessors to assign scores more accurately and to use the measure more precisely to guide program improvement. The updated ITERS-R™ (Harms, Cryer, & Clifford, 2006) contained the same 39 Items and 7 subscales, but added an expanded Scoresheet and expanded notes for clarification. The updated ITERS-R™ served as the basis for the current, completely revised ITERS-3™. We have maintained the approach of using indicators that are evaluated on the basis of classroom observation, and have substantially added to and revised the Items and indicators to reflect current knowledge and practice in the field.
Previous versions of the ITERS applied to classrooms in which the majority of children were younger than 30 months (2.5 years). The ITERS-3™ expands this age range: it is valid and appropriate for classrooms in which the majority of children are younger than 36 months, thus covering the entire birth-to-age-3 period. This expansion makes the tool a better match for the structure of most early childhood programs in the United States, which group children into year-based cohorts, and it also makes the scale complementary to the ECERS-3, which applies to classrooms in which most children are age 3 or older.
Scoring of the ITERS-3™ maintains the practice of scoring each set of yes/no indicators of quality and basing the 1–7 point Item scores on those indicator scores. We have also maintained six of the seven subscales from the ITERS-R™, but have eliminated the Parents and Staff subscale because of the limited variation in its scores and because its scoring depended entirely on director or teacher report rather than on observation. A total score is calculated in the same manner as for the ITERS-R™. Finally, we now recommend scoring all indicators, not only those needed to determine an Item score, in order to provide a more complete view of quality and clearer guidance for quality improvement.
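Although the full decision rules are given with each Item in the scale itself, the general approach of deriving Item scores from yes/no indicators can be sketched in code. The sketch below is purely illustrative: it assumes the conventional ERS-style stop-scoring rule, with indicators grouped under the anchor levels 1, 3, 5, and 7, and a total score computed as the mean of the Item scores; the function names and data layout are hypothetical and are not part of the scale itself.

```python
from __future__ import annotations

from statistics import mean


def score_item(indicators: dict[int, list[bool]]) -> int:
    """Derive a 1-7 Item score from yes/no indicator judgments.

    `indicators` maps each anchor level (1, 3, 5, 7) to a list of booleans
    recording whether each indicator at that level was met; for level 1,
    True means an inadequate-quality indicator applies.
    """
    lvl1, lvl3, lvl5, lvl7 = (indicators[k] for k in (1, 3, 5, 7))

    if any(lvl1):                    # any inadequate indicator applies
        return 1
    if sum(lvl3) < len(lvl3) / 2:    # fewer than half of level-3 indicators met
        return 1
    if not all(lvl3):
        return 2
    if sum(lvl5) < len(lvl5) / 2:
        return 3
    if not all(lvl5):
        return 4
    if sum(lvl7) < len(lvl7) / 2:
        return 5
    if not all(lvl7):
        return 6
    return 7


def total_score(item_scores: list[int]) -> float:
    """Overall score, assumed here to be the mean of the Item scores."""
    return mean(item_scores)


# Example: no inadequate indicators apply, all level-3 and level-5 indicators
# are met, and one of three level-7 indicators is met -> Item score of 5.
example = {1: [False, False], 3: [True, True, True],
           5: [True, True], 7: [True, False, False]}
print(score_item(example))       # 5
print(total_score([5, 6, 4]))    # 5.0
```

Because the recommendation above is to score every indicator rather than stopping once the Item score is determined, a complete record of the booleans in a structure like `indicators` also supports the kind of fine-grained quality-improvement feedback described in the text.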