Determine how the results from the selected student learning measures will be scaled for expected growth
In the previous steps, districts:

- identified student learning goals;
- conducted assessment inventories to find assessments that teachers can use to measure student learning;
- classified educators into groups to determine the set of common assessments available for different teacher types;
- identified which assessments and instruments will be reported at the teacher or other aggregated levels; and
- narrowed down the specific assessments and instruments that apply to individual teachers.

In this step, the results from the selected measures are scaled for expected growth. Any results used in evaluations must be scored and scaled against the learning goals or targets associated with them. Before a measure of student learning can be included, the district will have to determine what constitutes results that are much lower than expected, lower than expected, expected and higher than expected, based on where students or groups of students began. The general standard, however, is that students make at least a year's worth of growth in a year's time (or, in the Colorado Growth Model, "typical growth").
Districts are encouraged to collaborate with stakeholder groups to construct the scoring rules, determine the weights assigned to the results specified in the system and identify the method used to combine the weighted results. Ideally, decisions about the scoring rules, the weights assigned and the outcomes produced by the chosen combination method should be informed and refined by impact data. That is, the impact of each particular method of weighting and combining measures on the outcome of teachers'/principals' evaluations should be considered before making a final decision on the method.
Because the 2013-14 school year is the first year that these requirements will be implemented, districts should consider the following and plan to make amendments based on lessons learned in subsequent years:
- Consider the use of Colorado Growth Model results as reported on the School Performance Frameworks (SPF) at each school and apply these results as collective-attribution measures for all teacher types. CDE specifically recommends the use of median growth percentiles for a given grade level, school or specific content area. This information can be obtained by using SchoolView to access the school and district growth summary reports, the Colorado Growth Model Visualization Tool, Data Center and the Data Lab.
- Encourage the use of results from statewide summative assessments, district assessments and teacher-developed assessments in developing student learning objectives as included measures for all teacher types (see Figure 1).
An approach for evaluating results for the upcoming school year is as follows.
Including Colorado Growth Model results collectively in educator evaluation
- Depending on district size and school size, districts will want to choose an approach to using growth model results. For school-level collective attribution, districts may choose to use the median growth percentile as reported on the School Performance Frameworks for each available content area (reading, math and writing). Districts may also choose to use the median growth percentiles for disaggregated groups of students within a school that are also included in the SPF. Individual educators may have an MGP result for one content area, or three separate MGPs for reading, writing and math. Note that MGPs are not to be combined into a composite MGP across multiple content areas; they are to be treated as separate measures. Table 2 presents the ratings and scores associated with the MGP ranges defined in the SPFs.
- Note that the SPF can include state summative growth results for content areas assessed in consecutive years (reading, writing and math), depending on the size of the school. Schools may also have growth results from the ACCESS assessment. If a school does not have any growth scores reported on the SPF, the district may want to consider attributing the results included in the District Performance Framework (DPF) for each content area to each teacher in the school for the 2013-14 school year.
Table 2: Determining a rating using results from the Colorado Growth Model

|Ratings based on median growth percentile ranges|1st to 34th percentile|35th to 49th percentile|50th to 64th percentile|65th to 99th percentile|
|Points|0|1|2|3|
|Example (Reading TCAP)|The school MGP for the students on the Reading TCAP was between 1 and 34|The school MGP for the students on the Reading TCAP was between 35 and 49|The school MGP for the students on the Reading TCAP was between 50 and 64|The school MGP for the students on the Reading TCAP was between 65 and 99|

Note: The Colorado State Model Evaluation System will use a 0-3 point scale as illustrated in the second row. The cut points shown in this example are for use when evaluating Median Growth Percentiles.
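To make the mapping concrete, the cut points from Table 2 can be sketched as a simple lookup. This is an illustrative sketch only; the function name is hypothetical, and each content area is rated separately, since MGPs are never combined into a composite across reading, writing and math.

```python
def mgp_rating(mgp):
    """Return a 0-3 rating for a median growth percentile (1-99),
    using the SPF ranges from Table 2."""
    if not 1 <= mgp <= 99:
        raise ValueError("MGP must be between 1 and 99")
    if mgp <= 34:
        return 0  # much lower than expected
    if mgp <= 49:
        return 1  # lower than expected
    if mgp <= 64:
        return 2  # expected
    return 3      # higher than expected

# Rate each available content area separately (values are illustrative).
school_mgps = {"reading": 52, "writing": 34, "math": 67}
ratings = {area: mgp_rating(mgp) for area, mgp in school_mgps.items()}
```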
Including results from other measures
- When selecting multiple measures for use in educator evaluation, districts can work with their educators to set targets for student learning. At the end of the course or evaluation cycle, districts will have to evaluate the degree to which the targets set were met. (The Colorado State Model Evaluation System will use a 0-3 point scale for differentiating results relative to targets, as illustrated in Table 3 below.)
- Districts may establish processes for educators to use the results on the selected measures to determine expected targets for different groups of students in their classroom(s) at the beginning of the class/course/grade. (Guidance on setting expected targets will be addressed in the upcoming student learning objectives resource.)
- Student performance will be evaluated relative to the expected targets set for each of the measures included. Based on the rubric criteria in the sample below, teachers can earn a possible rating of zero to three on each of the measures of student learning that are included in their evaluation.
Table 3: Rubric for rating the results on selected measures or targets
|Target Evaluation Rating Rubric|
|Ratings for Results on selected measures or targets||Much lower than expected student performance||Lower than expected student performance||Expected student performance||Higher than expected student performance|
|SAMPLE Criteria*||Less than 64 percent of students defined in the SLO meet expected targets set||65-74 percent of students defined in the SLO meet expected targets set||75-84 percent of students defined in the SLO meet expected targets set||Greater than 85 percent of students defined in the SLO meet expected targets set|
*The sample criteria in Table 3 is to illustrate how targets may be set based on the learning targets and local context within a district, school, or classroom.
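The sample criteria can be sketched in the same way as the growth-model ranges. The cut points below (65/75/85) are illustrative only, mirroring the SAMPLE criteria; districts set their own criteria based on local context, and the function name is hypothetical.

```python
def slo_rating(pct_meeting_targets):
    """Return a 0-3 rating from the percent of students (0-100) defined
    in the SLO who met their expected targets (sample cut points)."""
    if not 0 <= pct_meeting_targets <= 100:
        raise ValueError("percentage must be between 0 and 100")
    if pct_meeting_targets < 65:
        return 0  # much lower than expected
    if pct_meeting_targets < 75:
        return 1  # lower than expected
    if pct_meeting_targets < 85:
        return 2  # expected
    return 3      # higher than expected
```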
CDE has identified a rating scale with four categories (much lower than expected, lower than expected, expected and higher than expected) for use in the model system. Points are associated with each rating, as shown in the points row of Table 3. When the points from each measure are combined, a composite score is established for each educator based on the weighted contribution of each measure included in his or her body of evidence. Combining scores will be described in the next section.
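One common way to form such a composite is a weighted average of the 0-3 ratings. The sketch below assumes this approach; the measure names and weights are purely illustrative, since the actual weights and combination method are local district decisions informed by impact data.

```python
def composite_score(ratings, weights):
    """Weighted composite of 0-3 measure ratings.
    Assumes weights sum to 1 so the composite stays on the 0-3 scale."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(ratings[m] * weights[m] for m in ratings)

# Illustrative body of evidence: three measures, each already rated 0-3.
ratings = {"school_mgp_reading": 2, "district_interim": 3, "slo": 1}
weights = {"school_mgp_reading": 0.25, "district_interim": 0.25, "slo": 0.5}
score = composite_score(ratings, weights)  # 0.5 + 0.75 + 0.5 = 1.75
```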