
Elements Comprising the Colorado Literacy Framework:

VI. Valid and Reliable Data

Valid and reliable student literacy achievement data support grantees and constituents in measuring the success of initiatives.

What and Why?

Implications for Best Practice

Exemplary Practices in Action


What is Valid and Reliable Data and Why is it important?

Scientifically-based research studies in education continue to acknowledge the value of regularly assessing students’ reading progress to prevent the downward spiral of reading failure. The probability of remaining a poor reader at the end of fourth grade, given a child was a poor reader at the end of first grade, is 88% (Juel, 1988). Therefore, valid and reliable assessment data are the key to early identification for intervention and to planning for the needs of all students identified at various levels of performance.

The traditional assessment model that has guided the educational community has been a wait-to-fail model. This model includes little to no analysis of data to create instructional groupings and ensure success.

Traditional Assessment Model

Conversely, the revised assessment model includes steps for analysis, planning for supplemental and differentiated instruction, and an ongoing monitoring of students’ performance to make instructional adjustments.

Revised Assessment Model with Data Analysis

Colorado aspires to be a state that:

  • Articulates and bridges the academic standards from grade to grade;
  • Provides leadership support and professional development so that instructional practices are robust and use data;
  • Utilizes screening, diagnostic, formative and summative achievement data and progress monitoring to ensure instruction that meets learners’ needs;
  • Engages in continuous data analysis and reflection to ensure an increase in literacy achievement.

Data-driven decision making, essential to assessing instructional practice and program effectiveness, is not possible in the absence of valid and reliable data. CDE’s Office of Standards and Assessment defines assessment validity and reliability as follows:

Validity: The extent to which a test measures what it was designed to measure. Validity is not inherent to the test, but applies to the purpose(s) for which the test is to be used. Multiple types of validity exist.

Reliability: The degree to which test scores for an individual or group of test takers are consistent over repeated administrations, and therefore, can be inferred to be dependable, replicable and relatively free of errors of measurement.
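
To make the reliability definition concrete, consistency across repeated administrations is often summarized with a correlation coefficient. The sketch below is purely illustrative (it is not a CDE tool, and the scores are hypothetical); it assumes a simple test-retest design and estimates reliability as the Pearson correlation between two administrations of the same assessment.

```python
# Illustrative sketch only: estimating test-retest reliability as the
# Pearson correlation between two administrations of the same assessment.
# The scores below are hypothetical.
from statistics import correlation  # available in Python 3.10+

first_administration = [12, 25, 40, 33, 18, 27, 45, 30]
second_administration = [14, 24, 42, 31, 20, 26, 44, 33]

# Values near 1.0 suggest scores are dependable and replicable across
# administrations; much lower values suggest more measurement error.
reliability = correlation(first_administration, second_administration)
print(f"Estimated test-retest reliability: {reliability:.2f}")
```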

There are four kinds of assessments to be utilized when creating an assessment plan.

  1. Screening assessments are brief assessments used to identify which students have skill deficits that would prevent them from being successful, and in which areas. Screening assessments can be independent of the program, as in DIBELS, AIMSweb, or the San Diego Quick Assessment. They can also be tied to the standards, curriculum, or program from which instructional targets will be identified; these may come in the form of pretests or screening assessments offered through the program.
  2. Progress monitoring assessments determine whether students are learning at appropriate rates and help teachers inform instruction on a regular basis. Progress monitoring assessments can be independent of the program, as in DIBELS and AIMSweb, or lesson assessments within the curriculum.
  3. Diagnostic assessments are conducted whenever more information is needed to identify a student’s instructional needs, typically due to a lack of growth. They allow clear identification of the strengths and weaknesses that may be preventing adequate growth and allow the teacher to target the instructional need. Diagnostic assessments include measures such as DAR, DORA, SDRT4, or MAPS.
  4. Summative or outcome assessments allow for classification of students in terms of their level of performance. These assessments show which students find success in the current structure of the school and which do not. They show how many students’ performance increased or decreased in a given year and help identify trends that inform structural changes to the district, school, or classroom. They are a powerful tool for determining what professional development staff need to increase student achievement.

The plan’s four main components align with and support the four types of assessments. At the school level, the four types of assessment can provide valuable support to:

  1. Identify students at the beginning of the school year who are at risk for reading difficulties and who may need extra support or intervention,
  2. Monitor students’ progress during the school year to determine whether at-risk students are making adequate progress and to identify any other students who may be falling behind,
  3. Collect student assessment data that inform instruction, and
  4. Assess whether instruction is sufficient to ensure that all students achieve grade-level expectations.


Implications for Best Practice

Colorado’s Department of Education has long embraced assessment as a critical element of education reform. In 1993, Colorado introduced the CSAP (Colorado Student Assessment Program), one of the nation’s first statewide summative assessments of K-12 student learning. The state now benefits from one of the longest longitudinal assessment databases in the United States. Although CSAP, as an outcome measure, provides a structure for determining program effectiveness and implementation, it is not well suited to making frequent, day-to-day instructional decisions.

Colorado has embraced the goal of using data and is being recognized for teacher use of data to guide effective teaching. Effective teachers meet regularly in building leadership teams using progress monitoring data captured every week to make instructional decisions. The duration, intensity, and purposefulness of instruction reveal the urgent nature of the work to be accomplished.

A chart, adapted from Honig, Diamond, & Gutlohn (2008), reinforces the four types of assessments, the purpose of each assessment, and when and with whom to administer the assessment. Each of these assessments is part of a comprehensive system for evaluating data to inform instructional decisions.

Type of Assessment: Screening

Purpose:

  • To identify students who are at risk for reading difficulties and may benefit from additional support
  • To determine the most appropriate starting point for instruction

Administration:

  • To elementary students, at the beginning of the school year or semester
  • To middle and high school students, at the end of the previous school year

Type of Assessment: Progress Monitoring (Curriculum Embedded; General or External)

Purpose:

  • To determine whether students are making adequate progress
  • To determine whether instruction needs to be adjusted
  • To measure the extent to which students have learned the material taught in a specific reading program (curriculum embedded)
  • To measure critical reading skills (phonemic awareness, phonics, fluency, vocabulary, or comprehension) in general (general or external)
  • To predict success in meeting grade-level standards by the end of the year

Administration:

  • To students reading at the expected level, three times a year
  • To students reading below the expected level, biweekly
  • To students reading significantly below the expected level, weekly or biweekly

Type of Assessment: Diagnostic

Purpose:

  • To pinpoint a student’s specific area of weakness
  • To provide in-depth information about students’ skills and instructional needs

Administration:

  • Only after other forms of assessment reveal that an individual student is reading below the expected level or not making sufficient progress

Type of Assessment: Outcome

Purpose:

  • To provide a bottom-line evaluation of the overall effectiveness of a reading program

Administration:

  • To all students, at the end of the school year or semester

Source: Honig, Diamond, & Gutlohn (2008), Teaching Reading Sourcebook, p. 11.

Example of Curriculum Embedded Assessment Analysis

Using lesson assessments and analyzing student answers can help teachers to inform individual and group instruction.

Features of lesson assessments include:

  • Students provide individual responses that teachers use to determine mastery of the skills taught.
  • A set of predetermined criteria for scoring students’ performance, focusing on the skills taught and their application.
  • Information teachers can use to determine how effective instruction has been, which students may need additional support, and where gaps in understanding the content exist.

How to use lesson assessments


Lesson 1 Assessment (Date: 9/4/09)

Score ranges for each category (Benchmark / Strategic / Intensive):

  • Comprehension, Multiple Choice: 8-6 / 5-4 / 3-0
  • Comprehension, Written Response: 2 / 1 / 0
  • Focus Skill (Character Traits & Motivation): 10-8 / 7-5 / 4-0
  • Vocabulary: 6-5 / 4-3 / 2-0
  • Synonyms & Antonyms: 10-8 / 7-5 / 4-0
  • Grammar: 8-6 / 5-4 / 3-0
  • Fluency & Accuracy: ≥ 93 / 71-92 / ≤ 70
  • Overall: 44-35 / 29-22 / 16-0

Student scores (in the same category order):

  • Student 1: 5, 1, 9, 2, 5, 6, 80, 28
  • Student 2: 7, 2, 9, 5, 10, 8, 112, 41
  • Student 3: 2, 1, 4, 4, 8, 6, 70, 25

The chart above lists categories (e.g., comprehension multiple choice, vocabulary, and grammar), which are the areas the assessment identifies as focus skills. The first set of numbers, next to the date, lists the possible score ranges for each category. Within each skill area, the benchmark, strategic, and intensive ranges have been clarified and color coded for ease in analyzing the data.

  • Benchmark indicates that a student is on track for success and is color coded green.
  • Strategic indicates the student is slightly off track for success and may need some additional support to ensure success. This usually comes in the form of scaffolding, which could include pre-teaching or re-teaching parts of the lesson. Strategic is color coded in yellow.
  • Intensive indicates the student is off track and needs more immediate and intensive intervention to prevent further reading failure. Intensive is color coded pink.

The sample data viewed horizontally identify which individual students are of concern and in which particular categories. Student 3, whose overall performance is barely strategic, has three areas of major concern and two areas that need some additional support. Student 2 is meeting the performance requirements and may need extension activities. When analyzing these three students, the teacher also needs to look for patterns of concern, such as overall difficulty with learning new vocabulary words. Later lesson assessments may reflect similar areas of concern, or a result may prove to be an isolated incident.
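
To make this analysis concrete, the sketch below (an illustration only, not part of the framework) applies the cut points from the chart above to each student's scores, labels each category as benchmark, strategic, or intensive, and flags areas of concern for the horizontal analysis.

```python
# Sketch: classify lesson-assessment scores into benchmark / strategic /
# intensive tiers using the cut points from the chart above, then flag
# areas of concern for each student. Illustrative only.

# Lowest score that still counts as benchmark or strategic for each
# category (taken from the score ranges in the chart; anything below the
# strategic floor is intensive).
CUTS = {                          # (benchmark floor, strategic floor)
    "Comprehension MC":      (6, 4),
    "Comprehension Written": (2, 1),
    "Focus Skill":           (8, 5),
    "Vocabulary":            (5, 3),
    "Synonyms/Antonyms":     (8, 5),
    "Grammar":               (6, 4),
    "Fluency & Accuracy":    (93, 71),
    "Overall":               (35, 22),
}

def tier(category: str, score: int) -> str:
    benchmark_floor, strategic_floor = CUTS[category]
    if score >= benchmark_floor:
        return "benchmark"
    if score >= strategic_floor:
        return "strategic"
    return "intensive"

students = {
    "Student 1": [5, 1, 9, 2, 5, 6, 80, 28],
    "Student 2": [7, 2, 9, 5, 10, 8, 112, 41],
    "Student 3": [2, 1, 4, 4, 8, 6, 70, 25],
}

for name, scores in students.items():
    results = {cat: tier(cat, s) for cat, s in zip(CUTS, scores)}
    concerns = [cat for cat, t in results.items() if t == "intensive"]
    support = [cat for cat, t in results.items() if t == "strategic"]
    print(f"{name}: major concern = {concerns or 'none'}, "
          f"additional support = {support or 'none'}")
```

Run on the sample data, the sketch reproduces the reading of the chart given above: Student 3 shows three intensive areas and two strategic areas, while Student 2 is at benchmark across the board.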

The sample data viewed vertically identifies areas of strength and weakness. The chart shows grammar instruction as a strength for all three students. Comprehension is an overall weakness. Looking at both the multiple choice and written response, there is reason for concern. Reviewing instruction and student engagement would be important. Several questions could be discussed:

  1. Were there enough practice opportunities provided in the curriculum?
  2. How many practice opportunities did students receive?
  3. Did all students participate in the guided practice and feedback?
  4. Did all students receive specific, corrective feedback?
  5. Were all students actively engaged?
  6. What formative assessments were used during instruction to determine student understanding?

Protheroe (2001, p. 2) states, “There is a growing body of evidence that the use of high-quality, targeted assessment data, in the hands of school staff trained to use it effectively, can improve instruction.”

A Prevention Model (American Federation of Teachers, 2004) has three essential practices:

  1. An established, systematic process for screening all students in the early grades to determine which students have skill deficits in the critical areas, and in which areas.
  2. Procedures to provide data-informed, differentiated intervention instruction in small groups.
  3. Continual monitoring of student progress to ensure needs are being met.

The prevention model is based on the idea that all but a few students can be taught to read proficiently and that early identification is key. It also relies on effective assessment tools to determine instructional needs.

One effective tool is a data wall or board. After screening all students with an assessment such as DIBELS, a visual representation of the results can be created called a data board.

These walls/boards provide visual displays of color coded results that initiate effective discussions about instruction. Several students have a pink label with a layer of yellow. This indicates that the students went from intensive in the fall to strategic in the winter: the winter data was layered over the top of the fall data so that both remain visible. This is a powerful visual of the “movement” of students.

A student's label might look like this:

Name: Danny Vincent    1-A
(Top row: name of student, grade, and teacher’s last-name initial)

Fall:
  LNF 14 (progress monitoring scores: 18, 20, 30, 36, 39, 42)
  PSF 25
  NWF 12

Winter:
  PSF 65
  NWF 45
  ORF 26

Spring: PSF _____ NWF _____ ORF _____

A separate area of the label is reserved for any additional assessments the school would like to monitor.

Yellow indicates that the student was strategic in the fall on the overall instructional recommendation for DIBELS. LNF (Letter Naming Fluency) was deemed intensive; PSF (Phoneme Segmentation Fluency) and NWF (Nonsense Word Fluency) were deemed benchmark, and each was color coded accordingly. The numbers to the right of LNF for fall are the progress monitoring scores the student received; progress is evident. In the winter, the student was at benchmark in all areas.
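
One way to think about the information each label carries is as a small data record. The sketch below is illustrative only (it is not a DIBELS or CDE format); it uses Danny's scores from the example above and the green/yellow/pink color coding described in this section.

```python
# Sketch of the information a data-board label carries, using Danny's
# scores from the example above. The structure and the color mapping
# (green = benchmark, yellow = strategic, pink = intensive) follow the
# description in the text; illustrative only, not a DIBELS format.
from dataclasses import dataclass, field

TIER_COLORS = {"benchmark": "green", "strategic": "yellow", "intensive": "pink"}

@dataclass
class Label:
    student: str
    grade_and_teacher: str                 # e.g. "1-A": grade 1, teacher A
    fall: dict = field(default_factory=dict)      # measure -> score
    winter: dict = field(default_factory=dict)
    progress_monitoring: dict = field(default_factory=dict)  # measure -> scores
    fall_tier: str = "benchmark"           # overall instructional recommendation
    winter_tier: str = "benchmark"

    def colors(self) -> tuple[str, str]:
        """Fall and winter label colors as layered on the data wall."""
        return TIER_COLORS[self.fall_tier], TIER_COLORS[self.winter_tier]

danny = Label(
    student="Danny Vincent",
    grade_and_teacher="1-A",
    fall={"LNF": 14, "PSF": 25, "NWF": 12},
    winter={"PSF": 65, "NWF": 45, "ORF": 26},
    progress_monitoring={"LNF": [18, 20, 30, 36, 39, 42]},
    fall_tier="strategic",                 # yellow label in the fall
    winter_tier="benchmark",               # green layer added in the winter
)
print(danny.colors())                      # ('yellow', 'green')
```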

After collecting the data and creating a visual display, data meetings are convened. An assessment team that includes all necessary stakeholders discusses questions, centered on the data, that will assist in making instructional decisions for students. Taking time to purposefully plan data meetings ensures a more effective analysis of data and collaboration among colleagues. Effective analysis and collaboration, when done well, allow for instructional planning that leads to increases in student achievement. See data meeting planning template.

Data Meetings

Based on the experience of coaches involved in RF in Wyoming, Idaho, and Montana (coach institute, Jackson Hole, Wyoming, 2007).

Planning the Meeting

Logistics

  • Comfortable location- adult chairs and enough room to spread out necessary materials.
  • Visual display of the data and goals- post the grade level goals and create a visual data wall.
  • Materials Ready- provide an agenda ahead of time with specific lists of items to bring such as data reports, student booklets, diagnostic assessments, and observational data.
  • Time- plan for 1 to 1 ½ hours

Conducting the Data Meeting

  • Establish norms- ensure all members have a chance to speak
  • Minimize venting
  • Identify Roles- to help stay focused on the task and time, designate a time keeper and note taker to follow the agenda
  • Provide Snacks- IT ALWAYS HELPS
  • Create a process for analyzing data such as the instructional profiles listed below. Discuss student progress, reorganize intervention groups if necessary, identify additional students who may need intensive support
  • Adjust data wall

Follow-up from the Meeting

  • Action Planning- create an action plan that lists goals, responsibilities, and measures of progress/success. Use this at the next data meeting to determine strengths and weaknesses of the action plan’s implementation.
  • Next Meeting- schedule next meeting

Stiggins (2001) shares two conditions necessary to integrate assessment into the teaching and learning process:

  1. To assess student achievement accurately, teachers and administrators must understand the achievement targets students are to master.
  2. An assessment literate faculty has two skills: the ability to gather dependable and quality information about student achievement, and the ability to use that information effectively to maximize student achievement (2001, p. 20).

School Level Data Meetings

  1. Set goals and make the goals known.
    1. Ensure the goals are attainable, measurable, specific to a skill and have a timeline.
  2. Schedule the staff meeting, allowing an hour to 1 ½ hours for the meeting.
  3. Prepare for the meeting.
    1. Make predictions. Which staff members might question the data? Will specific students be an issue? Which data will be used to demonstrate how you are doing as a school?
    2. Choose beacon data/teachers that have pre-approved use of individual data for the school-wide meeting.
  4. Prepare meeting agenda.
  5. Bring ALL data to the meeting (varying sources such as teacher observation and lesson assessments).
    1. DIBELS report examples
      1. Summary of Impact by School/District (eventually)
      2. Summary of Impact by Class (eventually)
      3. Instructional Recommendation Report
      4. Student Booklets and Progress Monitoring Graphs
  6. Remind staff of school goals.
  7. Review the data meeting process with staff including meeting norms.
  8. Have staff members make predictions about the current data.
    1. Which students are on target?
    2. Which students are advanced?
    3. Which students need a little extra scaffolding?
    4. Which students need a lot of extra scaffolding?
    5. What resources and professional development need to be provided?
  9. Ask the question, “How are we doing?”
  10. “Drill Down” to the student level.
    1. After looking at school level, drill down to grade level and as necessary drill down to class and student level to answer questions.
  11. Pose the question, “How Do We Get There?”
    Remind staff that this question will be addressed in detail at upcoming grade-level meetings that focus on data analysis and decision-making. Each grade level will set goals, evaluate current instruction, and make intervention adjustments based on the data.
    1. Discuss current school-wide issues that might answer this question (e.g. student engagement, program implementation, pacing)
    2. Allow plenty of discussion time

See also sample questions to ask during the data meetings.

Additional Sample Questions to Ask During the Data Meetings

The examples given were developed using DIBELS and AIMSweb, but other progress monitoring assessments, such as lesson assessments, could be used.

School Level Data Meeting

Make Predictions - Give teachers an opportunity to make predictions about school-level performance. Based on the fall data, what do we expect of the winter data?

  • Percent of students advanced?
  • Percent of students at benchmark/proficient?
  • Percent of students at strategic/nearing proficient?
  • Percent of students at intensive/novice?

Confirm Predictions - Present the data. Look at each grade level’s percentage at benchmark/proficient, strategic/nearing proficient, and intensive/novice in the fall and compare it to the winter.

  • Which grade levels maintained or increased the number of benchmark students by winter?
  • Which grade levels reduced the number of strategic students by moving them to benchmark?
  • Which grade levels reduced the number of intensive students by moving them to strategic or benchmark?
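
As a sketch of the fall-to-winter comparison these questions describe (illustrative only, with hypothetical tier labels and counts), each grade level's screening results can be summarized as the percent of students at each tier and compared across benchmarking periods:

```python
# Sketch: compare the percent of students at each tier from fall to winter
# for a grade level. Tier labels and counts below are hypothetical.
from collections import Counter

def tier_percentages(tiers: list[str]) -> dict[str, float]:
    """Percent of students at each tier for one benchmarking period."""
    counts = Counter(tiers)
    total = len(tiers)
    return {t: 100 * counts.get(t, 0) / total
            for t in ("benchmark", "strategic", "intensive")}

grade_1 = {
    "fall":   ["benchmark"] * 40 + ["strategic"] * 25 + ["intensive"] * 15,
    "winter": ["benchmark"] * 48 + ["strategic"] * 22 + ["intensive"] * 10,
}

fall = tier_percentages(grade_1["fall"])
winter = tier_percentages(grade_1["winter"])
for t in ("benchmark", "strategic", "intensive"):
    print(f"Grade 1 {t}: {fall[t]:.0f}% (fall) -> {winter[t]:.0f}% (winter)")
```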

Grade Level Data Meetings

Make Predictions - Give teachers an opportunity to make predictions about classroom-level performance.

  • Look at the intensive students’ fall data. Predict the level of performance at the winter benchmark assessment.
  • Look at the strategic students’ fall data. Predict the level of performance at the winter benchmark assessment.
  • Look at the benchmark students’ fall data. Predict the level of performance at the winter benchmark assessment.

Confirm Predictions - For each group (intensive, strategic, and benchmark), discuss: Why did the child/children match the prediction? Why did the child/children not match the prediction?

Additional Questions for Digging Deeper

  • Did the student score in a similar range on previous assessments?
  • Is this a long-standing performance pattern for the student?
  • Is this student identified for special education services, and has he/she already had a diagnostic assessment?
  • In the case of students on IEPs, are they already receiving additional reading instruction?
  • Do content teachers recommend the student receive a diagnostic assessment of his/her reading abilities?

Source: Improving Adolescent Reading: Findings from Research (2004, p. 30).


Exemplary Practices in Action

