Transcript (ASPP) – October 2010

Rate of Improvement
Version 2.0: Research Based
Calculation and Decision Making
Caitlin S. Flinn, EdS, NCSP
Andrew E. McCrea, MS, NCSP
Matthew Ferchalk, EdS, NCSP
ASPP Conference 2010
Today’s Objectives
 Explain what RoI is, why it is important, and how to compute it.
 Establish that Simple Linear Regression should be the standardized procedure for calculating RoI.
 Discuss how to use RoI within a problem solving/school improvement model.

RoI Definition

Algebraic term: slope of a line
 Vertical change over the horizontal change
 Rise over run
 m = (y2 - y1) / (x2 - x1)
 Describes the steepness of a line (Gall & Gall, 2007)

RoI Definition

Finding a student’s RoI = finding the slope of a line
 Using two data points on that line
 Finding the line itself
 Linear regression
 Ordinary Least Squares

How does Rate of Improvement Fit into the Larger Context?
 School Improvement/Comprehensive School Reform
 Response to Intervention
 Dual Discrepancy: Level & Growth
 Rate of Improvement
School Improvement/Comprehensive School Reform
 Grade level content expectations (ELA, math, science, social studies, etc.).
 Work toward these expectations through classroom instruction.
 Understand impact of instruction through assessment.

Assessment

Formative Assessments/High Stakes Tests
 Does student have command of content expectation (standard)?
Universal Screening using CBM
 Does student have basic skills appropriate for age/grade?
Assessment
 Q: For students who are not proficient on grade-level content standards, do they have the necessary basic reading/writing/math skills?
 A: Look at Universal Screening; if above criteria, gear intervention toward the content standard; if below criteria, gear intervention toward the basic skill.

Progress Monitoring
 Frequent measurement of knowledge to inform our understanding of the impact of instruction/intervention.
 Measures of basic skills (CBM) have demonstrated reliability & validity (see table at www.rti4success.org).
[Flowchart] Classroom Instruction (Content Expectations) → Measure Impact (Test)
 Proficient: continue instruction.
 Non-Proficient: use diagnostic test to differentiate content need vs. basic skill need.
 Content need → Intervention → Progress Monitor (with CBM, if CBM is an appropriate measure)
 Basic skill need → Intervention → Progress Monitor with CBM → Rate of Improvement
So…
 Rate of Improvement (RoI) is how we understand student growth (learning).
 RoI is reliable and valid (psychometrically speaking) for use with CBM data.
 RoI is best used when we have CBM data, most often when dealing with basic skills in reading/writing/math.
 RoI can be applied to other data (like behavior) with confidence too!
 RoI is not yet tested on typical Tier I formative classroom data.
RoI is usually applied to…
 Tier One students in the early grades at risk for academic failure (low green kids).
 Tier Two & Three intervention groups.
 Special education students (and IEP goals).
 Students with behavior plans.
RoI Foundations

Deno, 1985
 Curriculum-based measurement
 General outcome measures
 Short
 Standardized
 Repeatable
 Sensitive to change
RoI Foundations

Fuchs & Fuchs, 1998
 Hallmark components of Response to Intervention
 Ongoing formative assessment
 Identifying non-responding students
 Treatment fidelity of instruction
 Dual discrepancy model
 One standard deviation from typically performing peers in level and rate
RoI Foundations

Ardoin & Christ, 2008
 Slope for benchmarks (3x per year)
 More growth from fall to winter than winter to spring
 Might be helpful to use RoI for fall to winter
 And a separate RoI for winter to spring
RoI Foundations

Fuchs, Fuchs, Walz, & Germann, 1993
 Typical weekly growth rates
 Needed growth
 1.5 to 2.0 times typical slope to close the gap in a reasonable amount of time
RoI Foundations

Deno, Fuchs, Marston, & Shin, 2001
 Slope of frequently non-responsive children approximated the slope of children already identified as having a specific learning disability
RoI & Statistics

Gall & Gall, 2007
 10 data points are a minimum requirement for a reliable trendline
 How does that affect the frequency of administering progress monitoring probes?
Importance of Graphs

Vogel, Dickson, & Lehman, 1990
 Speeches that included visuals, especially in color, improved:
 Immediate recall by 8.5%
 Delayed recall (3 days) by 10.1%
Importance of Graphs
 “Seeing is believing.”
 Useful for communicating large amounts of information quickly
 “A picture is worth a thousand words.”
 Transcends language barriers (Karwowski, 2006)
 Responsibility for accurate graphical representations of data
Skills Typically Graphed

Reading
 Oral Reading Fluency
 Word Use Fluency
 Reading Comprehension
 MAZE
 Retell Fluency
 Early Literacy Skills
 Initial Sound Fluency
 Letter Naming Fluency
 Letter Sound Fluency
 Phoneme Segmentation Fluency
 Nonsense Word Fluency
Spelling
Written Expression
Behavior
Math
 Math Computation
 Math Facts
 Early Numeracy
 Oral Counting
 Missing Number
 Number Identification
 Quantity Discrimination
Importance of RoI
 Visual inspection of slope
 Multiple interpretations
 Instructional services
 Need for explicit guidelines
Ongoing Research
 RoI for instructional decisions is not a perfect process
 Research is currently addressing sources of error:
 Christ, 2006: standard error of measurement for slope
 Ardoin & Christ, 2009: passage difficulty and variability
 Jenkins, Graff, & Miglioretti, 2009: frequency of progress monitoring
Future Considerations

Questions yet to be empirically answered
 What parameters of RoI indicate a lack of RtI?
 How does standard error of measurement play into using RoI for instructional decision making?
 How does RoI vary between standard protocol interventions?
 How does this apply to non-English speaking populations?
How is RoI Calculated?
Which way is best?
Multiple Methods for Calculating Growth

Visual Inspection Approaches
 “Eye Ball” Approach
 Split Middle Approach
 Tukey Method
Quantitative Approaches
 Last point minus First point Approach
 Split Middle & Tukey “plus”
 Linear Regression Approach
The Visual Inspection
Approaches
Eye Ball Approach

[Chart: sample progress monitoring data across 8 weeks, with scores ranging roughly from 8 to 14, and a visually estimated trendline.]
Split Middle Approach

Drawing “through the two points obtained
from the median data values and the
median days when the data are divided
into two sections”
(Shinn, Good, & Stein, 1989).
Split Middle

[Chart: the same 8 weeks of data divided into two halves; an X marks the median data value and median week of each half (14 and 9), and the trendline is drawn through the two Xs.]
Tukey Method
 Divide scores into 3 equal groups
 Divide groups with vertical lines
 In 1st and 3rd groups, find the median data point and median week and mark with an “X”
 Draw a line between the two “Xs”

(Fuchs et al., 2005. Summer Institute: Student progress monitoring for math.
http://www.studentprogress.org/library/training.asp)
Tukey Method

[Chart: the same 8 weeks of data divided into thirds; an X marks the median data point and median week of the 1st and 3rd groups (14 and 8), and the trendline is drawn through the two Xs.]
The Quantitative Approaches

Last minus First

Iris Center: last probe score minus first probe score, over last administration period minus first administration period.
(Y2 - Y1) / (X2 - X1) = RoI
http://iris.peabody.vanderbilt.edu/resources.html
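As a sketch, the last-minus-first computation can be written as a small function (Python here; the function name is ours, not the Iris Center's):

```python
def last_minus_first(times, scores):
    """RoI as (last score - first score) / (last time - first time).

    Every data point between the first and last probe is ignored --
    which is exactly the criticism of this method discussed later.
    """
    return (scores[-1] - scores[0]) / (times[-1] - times[0])

# The slides' numbers: a score of 8 at time 0 and 14 at week 8.
print(last_minus_first([0, 8], [8, 14]))  # 0.75
```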
Last minus First

[Chart: the same 8 weeks of data; only the first and last points are used.]

(14-8)/(8-0) = 0.75
Split Middle “Plus”

[Chart: the Split Middle trendline through X(14) and X(9), with its slope quantified.]

(14-9)/8 = 0.63
Tukey Method “Plus”

[Chart: the Tukey trendline through X(14) and X(8), with its slope quantified.]

(14-8)/8 = 0.75
Linear Regression

[Chart: the same 8 weeks of data with an Ordinary Least Squares trendline.]

y = 1.1429x + 7.3571
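The OLS slope behind a trendline equation like the one above can be computed by hand. This is a minimal sketch; the data series below is hypothetical, not the exact points behind the slide's chart:

```python
def ols_slope(x, y):
    """Ordinary Least Squares slope: the line that minimizes the sum
    of squared vertical distances to *all* data points, not just two."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    den = sum((xi - mean_x) ** 2 for xi in x)
    return num / den

# Hypothetical 8 weeks of progress monitoring scores.
weeks = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [8, 10, 11, 10, 14, 12, 14, 14]
print(round(ols_slope(weeks, scores), 2))  # 0.82 words per week
```

Because every point pulls on the fitted line, OLS is far less sensitive to an unlucky first or last probe than the last-minus-first method.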
RoI Consistency?

 Any Method of Visual Inspection: ???
 Last minus First: 0.75
 Split Middle “Plus”: 0.63
 Tukey “Plus”: 0.75
 Linear Regression: 1.10
RoI Consistency?


If we are not all using the same model to
compute RoI, we continue to have the same
problems as past models, where under one
approach a student meets SLD criteria, but
under a different approach, the student does not.
Hypothetically, if the RoI cut-off was 0.65 or
0.95, different approaches would come to
different conclusions on the same student.
RoI Consistency?
 Last minus First (Iris Center) and Linear Regression (Shinn, etc.) are the only quantitative methods discussed in the CBM literature.
 Study of 37 at-risk 2nd graders; difference in RoI between the LmF and LR methods:
 Whole year: 0.26 WCPM
 Fall: 0.31 WCPM
 Spring: 0.24 WCPM

McCrea (2010). Unpublished data.
Technical Adequacy

Without a consensus on how to compute
RoI, we risk falling short of having
technical adequacy within our model.
So, Which RoI Method is Best?
Literature shows that Linear
Regression is Best Practice


Student’s daily test scores…were entered into a
computer program…The data analysis program
generated slopes of improvement for each level
using an Ordinary-Least Squares procedure
(Hayes, 1973) and the line of best fit.
This procedure has been demonstrated to
represent CBM achievement data validly within
individual treatment phases (Marston, 1988;
Shinn, Good, & Stein, in press; Stein, 1987).
Shinn, Gleason, & Tindal, 1989
Growth (RoI) Research using Linear Regression

 Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35, 128-133.
 Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507-524.
 Good, R. H. (1990). Forecasting accuracy of slope estimates for reading curriculum-based measurement: Empirical evidence. Behavioral Assessment, 12, 179-193.
 Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.
Growth (RoI) Research using Linear Regression

 Jenkins, J. R., Graff, J. J., & Miglioretti, D. L. (2009). Estimating reading growth using intermittent CBM progress monitoring. Exceptional Children, 75, 151-163.
 Shinn, M. R., Gleason, M. M., & Tindal, G. (1989). Varying the difficulty of testing materials: Implications for curriculum-based measurement. The Journal of Special Education, 23, 223-233.
 Shinn, M. R., Good, R. H., & Stein, S. (1989). Summarizing trend in student achievement: A comparison of methods. School Psychology Review, 18, 356-370.
So, Why Are There So Many Other RoI Models?
 Ease of application
 Focus on Yes/No goal acquisition, not degree of growth
 How many of us want to calculate OLS Linear Regression formulas (or even remember how)?
Pros and Cons of Each Approach

Eye Ball, Split Middle & Tukey
 Pros: easy; understandable; no software needed; compare to aim/goal line; Yes/No to goal acquisition
 Cons: subjective; no statistic provided, no idea of the degree of growth
Pros and Cons of Each Approach

Last minus First
 Pros: provides a growth statistic; easy to compute
 Cons: does not consider all data points, only two

Split Middle & Tukey “Plus”
 Pros: considers all data points; easy to compute
 Cons: no support for the “plus” part of the methodology

Linear Regression
 Pros: all data points; Best Practice
 Cons: calculating the statistic
An Easy and
Applicable Solution
Get Out Your Laptops!
Open Microsoft Excel
I love
ROI
Graphing RoI
For Individual Students
Programming Microsoft Excel to
Graph Rate of Improvement:
Fall to Winter
Setting Up Your Spreadsheet
 In cell A1, type 3rd Grade ORF
 In cell A2, type First Semester
 In cell A3, type School Week
 In cell A4, type Benchmark
 In cell A5, type the Student’s Name (Swiper Example)
Labeling School Weeks
 Starting with cell B3, type numbers 1 through 18 going across row 3 (horizontal).
 Numbers 1 through 18 represent the number of the school week.
 You will end with week 18 in cell S3.
Labeling Dates

Note: You may choose to enter the date of
that school week across row 2 to easily
identify the school week.
Entering Benchmarks (3rd Grade ORF)
 In cell B4, type 77. This is your fall benchmark.
 In cell S4, type 92. This is your winter benchmark.
Entering Student Data (Sample)

Enter the following numbers, going across row 5, under the corresponding week numbers:
 Week 1 – 41
 Week 8 – 62
 Week 9 – 63
 Week 10 – 75
 Week 11 – 64
 Week 12 – 80
 Week 13 – 83
 Week 14 – 83
 Week 15 – 56
 Week 17 – 104
 Week 18 – 74
*CAUTION*
 If a student was not assessed during a certain week, leave that cell blank.
 Do not enter a score of zero (0); it will be calculated into the trendline and interpreted as the student having read zero words correct per minute during that week.
Graphing the Data
 Highlight cells A4 and A5 through S4 and S5
 Follow the Excel 2003 or Excel 2007 directions from here
Graphing the Data

Excel 2003
 Across the top of your worksheet, click on “Insert”
 In that drop-down menu, click on “Chart”
 A Chart Wizard window will appear
 Choose “Line”
 Choose “Line with markers…”
 “Data Range” tab; “Columns”
 “Chart Title”; “School Week” X axis; “WPM” Y axis
 Choose where you want your graph

Excel 2007
 Click Insert
 Find the icon for Line
 Click the arrow below Line
 6 graphics appear
 Choose “Line with markers”
 Your graph appears
 Change your labels by right clicking on the graph
 Your graph was automatically put into your data spreadsheet
Graphing the Trendline

Excel 2003 and Excel 2007
 Right click on any of the student data points
 Choose “Linear”
 Choose “Custom” and check the box next to “Display equation on chart”
 Clicking on the equation highlights a box around it; clicking on the box allows you to move it to a place where you can see it better
 You can repeat the same procedure to have a trendline for the benchmark data points
 Suggestion: label the trendline Expected ROI and move this equation under the first
Individual Student Graph

[Graph: Swiper’s scores and the benchmark across 18 weeks (0-120 WPM), with trendlines for the student (y = 2.5138x + 42.113) and the benchmark (y = 0.8824x + 76.118).]
Individual Student Graph
The equation indicates the slope, or rate of
improvement.
 The number, or coefficient, before "x" is
the average improvement, which in this
case is the average number of words per
minute per week gained by the student.

Individual Student Graph
The rate of improvement, or trendline, is
calculated using a linear regression, a
simple equation of least squares.
 To add additional progress
monitoring/benchmark scores once you’ve
already created a graph, enter additional
scores in Row 5 in the corresponding
school week.

Individual Student Graph
The slope can change depending on
which week (where) you put the
benchmark scores on your chart.
 Enter benchmark scores based on when
your school administers their benchmark
assessments for the most accurate
depiction of expected student progress.

Why Graph only 18
Weeks at a Time?
Assuming Linear Growth…
…Finding Curve-linear Growth
Non-Educational Example of Curve-linear Growth

[Weight Loss Chart: weight declining from 200 to 178 pounds over 10 weeks.]
 10 Week RoI = -2.5
 First 5 Weeks RoI = -3.6
 Second 5 Weeks RoI = -1.5
Academic Example of Curvilinear Growth

[Chart: WCPM growth over 36 weeks (0-70 WCPM).]
 BOY to MOY = 1.60
 MOY to EOY = 1.19
 BOY to EOY = 1.35
McCrea, 2010
 Looked at Rate of Improvement in a small 2nd grade sample
 Found differences in RoI when computed for fall and spring:
 Ave RoI for fall: 1.47 WCPM
 Ave RoI for spring: 1.21 WCPM
Ardoin & Christ, 2008
 Slope for benchmarks (3x per year)
 More growth from fall to winter than winter to spring
Christ, Yeo, & Silberglitt, in press
 Growth across benchmarks (3x per year)
 More growth from fall to winter than winter to spring
 Disaggregated special education population
Graney, Missall, & Martinez, 2009
 Growth across benchmarks (3x per year)
 More growth from winter to spring than fall to winter with R-CBM.
Fien, Park, Smith, & Baker, 2010
 Investigated relationship b/w NWF gains and ORF/Comprehension
 Found greater NWF gains in fall than in spring.
DIBELS (6th) ORF Change in Criteria

Grade: Fall to Winter / Winter to Spring
 2nd: 24 / 22
 3rd: 15 / 18
 4th: 13 / 13
 5th: 11 / 9
 6th: 11 / 5
AIMSweb Norms (Based on 50th Percentile)

Grade: Fall to Winter / Winter to Spring
 1st: 18 / 31
 2nd: 25 / 17
 3rd: 22 / 15
 4th: 16 / 13
 5th: 17 / 15
 6th: 13 / 12
Speculation as to why Differences in RoI within the Year
 Relax instruction after high stakes testing in March/April; a PSSA effect.
 Depressed BOY benchmark scores due to summer break; a rebound effect (Clemens).
 Instructional variables could explain differences in Graney (2009) and Ardoin (2008) & Christ (in press) results (Silberglitt).
 Variability within progress monitoring probes (Ardoin & Christ, 2008) (Lent).
Programming Excel
 Calculating Needed RoI
 Calculating Actual (Expected) RoI – Benchmark
 Calculating Actual RoI – Student
Calculating Needed RoI
 In cell T3, type Needed RoI
 Click on cell T5
 In the fx line (at top of sheet) type this formula: =((S4-B5)/18)
 Then hit enter
 Your result should read: 2.83
 This formula subtracts the student’s first score of the semester (cell B5) from the expected middle of year (MOY) benchmark (cell S4), then divides by 18 for the first 18 weeks (1st semester).
Calculating Actual (Expected) RoI – Benchmark
 In cell U3, type Actual RoI
 Click on cell U4
 In the fx line (at top of sheet) type this formula: =SLOPE(B4:S4,B3:S3)
 Then hit enter
 Your result should read: 0.88
 This formula considers 18 weeks of benchmark data and provides an average growth or change per week.
Calculating Actual RoI – Student
 Click on cell U5
 In the fx line (at top of sheet) type this formula: =SLOPE(B5:S5,B3:S3)
 Then hit enter
 Your result should read: 2.51
 This formula considers 18 weeks of student data and provides an average growth or change per week.
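The computation Excel's SLOPE performs can be sketched in Python. Like SLOPE, this version drops weeks where no probe was given (represented here as None), using Swiper's scores from the earlier slides:

```python
def excel_slope(known_ys, known_xs):
    """Rough equivalent of Excel's =SLOPE(known_ys, known_xs):
    pairs with a blank y-value (None) are dropped, then an
    ordinary least squares slope is fit to the rest."""
    pairs = [(x, y) for x, y in zip(known_xs, known_ys) if y is not None]
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    den = sum((x - mean_x) ** 2 for x, _ in pairs)
    return num / den

weeks = list(range(1, 19))
# Swiper's scores; unassessed weeks stay blank (None), never zero.
student = [41, None, None, None, None, None, None, 62, 63, 75, 64,
           80, 83, 83, 56, None, 104, 74]
benchmark = [77] + [None] * 16 + [92]
print(round(excel_slope(student, weeks), 2))    # 2.51
print(round(excel_slope(benchmark, weeks), 2))  # 0.88
```

The two printed slopes match the trendline equations on the Individual Student Graph (2.5138 and 0.8824), which is a useful sanity check on the spreadsheet.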

ROI as a Decision Tool within a Problem-Solving Model

Steps
1. Gather the data
2. Ground the data & set goals
3. Interpret the data
4. Figure out how to fit Best Practice into Public Education
Step 1: Gather Data
Universal Screening
Progress Monitoring
Common Screenings in PA
 DIBELS
 AIMSweb
 MBSP
 4Sight
 PSSA

Validated Progress Monitoring Tools
 DIBELS
 AIMSweb
 MBSP
 www.studentprogress.org
Step 2: Ground the Data
1) To what will we compare our
student growth data?
2) How will we set goals?
Multiple Ways to Look at Growth
 Needed Growth
 Expected Growth & Percent of Expected Growth
 Fuchs et al. (1993) Table of Realistic and Ambitious Growth
 Growth Toward Individual Goal*

*Best Practices in Setting Progress Monitoring Goals for Academic Skill Improvement (Shapiro, 2008)
Needed Growth
 Difference between the student’s BOY (or MOY) score and the benchmark score at MOY (or EOY).
 Example: MOY ORF = 10, EOY benchmark is 40, 18 weeks of instruction: (40-10)/18 = 1.67. The student must gain 1.67 wcpm per week to make the EOY benchmark.
Expected Growth
 Difference between two benchmarks.
 Example: MOY benchmark is 20, EOY benchmark is 40; expected growth is (40-20)/18 weeks of instruction = 1.11 wcpm per week.
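Both calculations are simple enough to sketch as functions (the function names are ours, for illustration):

```python
def needed_roi(current_score, target_benchmark, weeks):
    """Weekly growth a student needs to reach the target benchmark."""
    return (target_benchmark - current_score) / weeks

def expected_roi(start_benchmark, end_benchmark, weeks):
    """Typical weekly growth implied by the benchmarks themselves."""
    return (end_benchmark - start_benchmark) / weeks

# The slides' examples: MOY ORF of 10 against an EOY benchmark of 40,
# and benchmarks of 20 (MOY) and 40 (EOY), over 18 weeks of instruction.
print(round(needed_roi(10, 40, 18), 2))    # 1.67
print(round(expected_roi(20, 40, 18), 2))  # 1.11
```

Dividing a student's actual RoI by the expected RoI then gives the percent of expected growth used in the decision tables that follow.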

Looking at Percent of Expected Growth

% of Expected Growth: Tier I / Tier II / Tier III
 Greater than 150%: – / – / –
 Between 110% & 150%: – / – / Possible LD
 Between 95% & 110%: – / – / Likely LD
 Between 80% & 95%: May Need More / May Need More / Likely LD
 Below 80%: Needs More / Needs More / Likely LD

Tigard-Tualatin School District (www.ttsd.k12.or.us)
Oral Reading Fluency Adequate Response Table

Grade: Realistic Growth / Ambitious Growth
 1st: 2.0 / 3.0
 2nd: 1.5 / 2.0
 3rd: 1.0 / 1.5
 4th: 0.9 / 1.1
 5th: 0.5 / 0.8

Fuchs, Fuchs, Hamlett, Walz, & Germann (1993)
Digit Fluency Adequate Response Table

Grade: Realistic Growth / Ambitious Growth
 1st: 0.3 / 0.5
 2nd: 0.3 / 0.5
 3rd: 0.3 / 0.5
 4th: 0.75 / 1.2
 5th: 0.75 / 1.2

Fuchs, Fuchs, Hamlett, Walz, & Germann (1993)
From Where Should
Benchmarks/Criteria Come?

Appears to be a theoretical convergence
on use of local criteria (what scores do our
students need to have a high probability of
proficiency?) when possible.
Test Globally…
…Benchmark Locally
Objectives
 Rationale for developing Local Benchmarks
 Fun with Excel!
 Fun with Algebra!
 Local Benchmarks in Action
Rationale for Developing Local Benchmarks

 Stage & Jacobson (2001)
 Slope in Oral Reading Fluency reliably predicted performance on the Washington Assessment of Student Learning.
 McGlinchy & Hixon (2004)
 Results of this study show that CBM can be a valuable source for identifying which students are likely to be successful or fail state tests.
 Shapiro et al. (2006)
 Results support the use of CBM for determining which students are at risk for reading failure and who will fail state tests.
 Hintze & Silberglitt (2005)
 Oral Reading Fluency is highly connected to state test performance and is accurate at predicting those students who are likely to not meet proficiency.

Ask Jason Pedersen!
Rationale for Developing Local Benchmarks
 Identify and validate problems
 Creating ideas for instructional grouping, focus, or intensity
 Goal setting
 Determining the focus and frequency of progress monitoring
 Exiting students or moving students to different levels or tiers of intervention
 Systems level resource allocation and evaluation

(Stewart & Silberglitt, 2008)
Rationale for Developing Local Benchmarks

Silberglitt (2008)
 Districts should “refrain from simply adopting a set of national target scores, as these scores may or may not be relevant to the high-stakes outcomes for which their students must be adequately prepared.” (p. 1871)
 “By linking local assessments to high-stakes tests, users are able to establish target scores on these local assessments, scores that divide students between those who are likely and those who are unlikely to achieve success on the high-stakes test.” (p. 1870)
Rationale for Developing Local Benchmarks
 Discrepancy across states, in terms of the percentile ranks on a nationally administered assessment necessary to predict successful state test performance (Kingsbury et al., 2004)
 “Using cut scores based on the probability of success on an upcoming state-mandated assessment might be a useful alternative to normative data for making these decisions.” (Silberglitt & Hintze, 2005)
 Can be used to separate students into groups in an RtII framework (Silberglitt, 2008)
Rationale for Developing Local Benchmarks
 Useful in calculating discrepancy in level (Burns, 2008)
 Represent the school population where the students are getting their education (Stewart & Silberglitt, 2008)
 Teachers often use comparisons between students in their classroom; this helps to objectify those decisions (Stewart & Silberglitt, 2008)
Rationale for Developing Local Benchmarks

How accurately does it predict proficiency level in Third Grade?

[Bar chart: correct prediction percentages for six assessments (ORF-Loc-1 through ORF-Loc-3 and ORF-DIB-1 through ORF-DIB-3), ranging from 77% to 84%.]

(Ferchalk, Richardson & Cogan-Ferchalk, 2010)
Rationale for Developing Local Benchmarks

Percentage of students in Third Grade predicted to be successful on the PSSA who were actually successful (Negative Predictive Power):

[Bar chart: six assessments (ORF-Loc-1 through ORF-Loc-3 and ORF-DIB-1 through ORF-DIB-3), ranging from 81% to 94%.]

(Ferchalk, Richardson & Cogan-Ferchalk, 2010)
Rationale for Developing Local Benchmarks

Percentage of Third Grade students predicted to be unsuccessful who actually failed to meet proficiency on the PSSA (Positive Predictive Power):

[Bar chart: local benchmarks (ORF-Loc-1 through ORF-Loc-3) at 89%, 85%, and 91%; DIBELS benchmarks (ORF-DIB-1 through ORF-DIB-3) at 61%, 65%, and 64%.]

(Ferchalk, Richardson & Cogan-Ferchalk, 2010)
Getting Started
 Collect 3 or more years of student CBM and PSSA data
 Match the data for each student:

Name: Harry Potter | ORF - Fall: 41 | ORF - Winter: 51 | ORF - Spring: 73 | PSSA: 1080

 Use data extract and data farming features offered through PSSA / DIBELS / AIMSweb websites
 Download with student ID numbers
 If you have a data warehouse…then use your special magic…lucky!
Getting Started
 Reliable and valid data
 Linear / highly correlated data
 Gather data with integrity
 Do not teach to the test
 All students should be included in the norm group
 Be cautious of cohort effects

(Stewart & Silberglitt, 2008)
Getting Started
 PSSA Cut Scores:
http://www.portal.state.pa.us/portal/server.pt/community/cut_scores/7441
 Use the lower end score for Proficiency
 Download the data set from:
http://sites.google.com/site/rateofimprovement/
Wisdom from Teachers
(especially from our reading specialists Tina and Kristin!)



Children do not equal dots!
They are not numbers or data points!
Having said that…
≠
Fun with Excel!
Fun with Algebra!

Matt Burns – University of Minnesota

X = (Y - a) / b
 Y = Proficiency Score on the PSSA
 a = Intercept
 b = Slope
 X = Local Benchmark Score

(Burns, 2008)
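Burns's rearrangement can be sketched directly. The slope and intercept below come from the document's scatterplot (y = 2.561x + 1107.7); the proficiency cut of 1235 is a hypothetical placeholder, not an official PSSA value — substitute your state's published cut score:

```python
def local_benchmark(proficiency_cut, intercept, slope):
    """Burns (2008): solve Y = bX + a for X, the CBM score that
    predicts the proficiency cut on the state test."""
    return (proficiency_cut - intercept) / slope

# Slope/intercept from the scatterplot; 1235 is a hypothetical cut.
print(round(local_benchmark(1235, 1107.7, 2.561)))  # 50
```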
PSSA Reading and DIBELS ORF Scatterplot

[Scatterplot: student PSSA Reading scores (600-2000) against DIBELS Oral Reading Fluency (0-200), with the proficient-PSSA line and the regression slope y = 2.561x + 1107.7.]
PSSA Reading and DIBELS ORF Scatterplot

[The same scatterplot (y = 2.561x + 1107.7) with the DIBELS local benchmark line added.]
More Fun with Algebra!
 Predict a student’s Proficiency Score
 Resolve the equation:
 X = (Y - a) / b
 Y = (Xb) + a
 Y = Predicted PSSA Score
 Use with Caution!

Data Sample
 Student: 93 wcpm in the fall
 Slope = 2.56
 Intercept = 1108
 Y = (93 × 2.56) + 1108
 Y = 1346
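The prediction in the data sample can be sketched the same way (with the slide's own caveat: use with caution):

```python
def predicted_state_score(cbm_score, slope, intercept):
    """Resolve X = (Y - a)/b back to Y = Xb + a: predict a state
    test score from a CBM score."""
    return cbm_score * slope + intercept

# The slides' data sample: 93 wcpm in the fall, slope 2.56, intercept 1108.
print(round(predicted_state_score(93, 2.56, 1108)))  # 1346
```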
Local Benchmark Applications

Northern Lebanon School District Local Benchmarks: ORF Benchmarks (Fall / Winter / Spring)
 Grade 3: DIBELS 77 / 92 / 110; Local 50 / 65 / 80
 Grade 4: DIBELS 93 / 105 / 118; Local 73 / 96 / 103
 Grade 5: DIBELS 104 / 115 / 124; Local 118 / 130 / 136
 Grade 6: DIBELS 109 / 120 / 125; Local 113 / 115 / 113
Local Benchmark Applications

For those that like the DIBELS graphs:

[Graph: Oral Reading Fluency (0-150 WCPM) across 36 weeks (Sept-May), showing local benchmarks and rate of improvement.]
Local Benchmark Applications

[Spreadsheet: class roster of DIBELS benchmark assessments. For each student, the Beginning, Middle, and End ORF scores are listed alongside the likelihood (“Likely”/“Unlikely”) of meeting proficiency at each benchmark period.]
Diagnostic Accuracy
 Sensitivity: of all the students who failed the PSSA, what percentage were accurately predicted to fail based on their ORF score.
 Specificity: of all of the students who passed the PSSA, what percentage were accurately predicted to pass based on their ORF score.
 Negative Predictive Power: percentage of students predicted to be successful on the PSSA who were actually successful.
 Positive Predictive Power: percentage of students predicted to be unsuccessful who actually failed to meet proficiency on the PSSA.

(Silberglitt, 2008; Silberglitt & Hintze, 2005)
PSSA Reading and DIBELS ORF Scatterplot

[Scatterplot: the DIBELS local benchmark line and the proficient-PSSA line divide the plot into quadrants of True Positives, False Positives, True Negatives, and False Negatives.]
Local Benchmarks - Method 2
 Fun with SPSS!
 Logistic Regression & ROC Curves
 More accurate
 Helps to balance Sensitivity, Specificity, Negative & Positive Predictive Power
 For more information see Best Practices in Using Technology for Data-Based Decision Making (Silberglitt, 2008)
If Local Criteria are Not an Option
 Use norms that accompany the measure (DIBELS, AIMSweb, etc.).
 Use national norms.
Making Decisions: Best Practice
 Research has yet to establish a blueprint for ‘grounding’ student RoI data.
 At this point, teams should consider multiple comparisons when planning and making decisions.
Making Decisions: Lessons From the Field
 When tracking on grade level, consider an RoI that is 100% of expected growth as a minimum requirement; consider an RoI that is at or above the needed RoI as optimal.
 So, 100% of expected and on par with needed become the limits of the range within which a student should be achieving.
Is there an easy way to do all of
this?
Oral Reading Fluency

[Spreadsheet: class-wide progress monitoring from 01/15/09 to 05/14/09 (weeks 1-18). Each row lists a student’s weekly ORF scores, Needed RoI*, Actual RoI**, and % of Expected RoI (ranging from 2% to 213% in this sample; Expected RoI at benchmark level = 1.29).]

* Needed RoI based on difference between week 1 score and benchmark score for week 18 divided by 18 weeks
** Actual RoI based on linear regression of all data points
Benchmarks based on DIBELS goals

[Example rows: the benchmark grows from 68 (week 1) to 90 (week 18), a needed RoI of 1.29; a student scoring 22, 27, and 56 across the same span has a needed RoI of 3.78, an actual RoI of 1.89, and 147% of expected RoI.]
Access to Spreadsheet Templates
 http://sites.google.com/site/rateofimprovement/home
 Click on Charts and Graphs.
 Update dates and benchmarks.
 Enter names and benchmark/progress monitoring data.
What about Students not on
Grade Level?
Determining Instructional Level
 Independent/Instructional/Frustrational
 Instructional level often b/w the 40th or 50th percentile and the 25th percentile.
 Frustrational level below the 25th percentile.
 AIMSweb: Survey Level Assessment (SLA).
Setting Goals off of Grade Level
 100% of expected growth is not enough.
 Needed growth only gets the student to the instructional-level benchmark, not the grade-level one.
 Risk of not being ambitious enough.
 Plenty of ideas, but limited research on best practice in goal setting off of grade level.
Possible Solution (A)
 Weekly probe at instructional level; compare to expected and needed growth rates at that level.
 Ambitious goal: 200% of expected RoI.
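Setting a goal at 200% of the expected RoI is simple arithmetic; a hedged sketch (the `ambitious_goal` helper and the sample student are illustrative, with the 3rd-grade realistic expected RoI of 1.0 taken from the Fuchs et al. table):

```python
def ambitious_goal(current_score, expected_roi, weeks, multiplier=2.0):
    """Goal score after `weeks` of growth at 200% of the expected RoI."""
    return current_score + multiplier * expected_roi * weeks

# A student reading 40 WCPM at a 3rd-grade instructional level
# (realistic expected RoI 1.0 WCPM/week), over an 18-week window:
print(ambitious_goal(40, 1.0, 18))  # 76.0
```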
Oral Reading Fluency
[Spreadsheet: ORF monitored off grade level, weekly probes from 01/15/10 to 05/14/10, with students probed at 2nd- through 5th-grade instructional levels and Needed RoI*, Actual RoI**, and % of Expected RoI computed against the expected growth rate for each probe level.]
* Needed RoI based on the difference between the week 1 score and the Benchmark score for week 18, divided by 18 weeks.
** Actual RoI based on linear regression of all data points.
Possible Solution (B)
 Weekly probe at instructional level as a sensitive indicator of growth.
 Monthly probes (give 3, not just 1) at grade level to compute RoI.
 Goal based on grade-level growth (more than 100% of expected).
Step 3: Interpreting Growth
What do we do when we do not get the growth we want?
 When to make a change in instruction and intervention?
 When to consider SLD?
When to make a change in instruction and intervention?
 Are there enough data points (6 to 10)?
 Less than 100% of expected growth.
 Not on track to make benchmark (needed growth).
 Not on track to reach the individual goal.
When to consider SLD?
Continued inadequate response despite:
 Fidelity with Tier I instruction and Tier II/III intervention.
 Multiple attempts at intervention.
 An individualized problem-solving approach.
Evidence of dual discrepancy…
[Spreadsheet: the same class-wide ORF data with a "Dual Discrepancy?" column. Students meeting level and growth expectations are labeled "Keep On Truckin"; six students discrepant in both level (week-18 scores of 46 and below, against a benchmark of 90) and growth (2%-73% of expected RoI) are flagged "BIG PROBLEMS".]

Growth Criteria
 >125%
 85% - 125%
 <85%
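A sketch of how these cutoffs might be applied in code (the category labels are paraphrased from the slide, not an official rubric):

```python
def growth_category(pct_of_expected):
    """Map % of expected RoI onto the slide's three growth criteria."""
    if pct_of_expected > 125:
        return "above expectations"   # >125%: keep on truckin'
    if pct_of_expected >= 85:
        return "adequate"             # 85%-125%: continue and monitor
    return "inadequate"               # <85%: change intervention; check dual discrepancy

print(growth_category(167))  # above expectations
print(growth_category(93))   # adequate
print(growth_category(58))   # inadequate
```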
Three Levels of Examples
 Whole Class
 Small Group
 Individual Student
- Academic Data
- Behavior Data
Whole Class Example
Computation
[Spreadsheet: 3rd grade whole-class math computation scores, weekly probes from 01/15/10 to 05/14/10, with 50th- and 25th-percentile benchmarks and each student's Needed RoI*, Actual RoI**, and % of Expected RoI (expected RoI of 0.35 at the 50th percentile and 0.24 at the 25th percentile).]
* Needed RoI based on the difference between the week 1 score and the week-18 benchmark score, divided by 18 weeks.
** Actual RoI based on linear regression of all data points.

Digit Fluency Adequate Response Table
Percentiles based on AIMSweb Growth Tables
Grade       Realistic Growth   Ambitious Growth
1st Grade   0.3                0.5
2nd Grade   0.3                0.5
3rd Grade   0.3                0.5
4th Grade   0.75               1.2
5th Grade   0.75               1.2
(Fuchs, Fuchs, Hamlett, Walz, & Germann, 1993)
3rd Grade Math Whole Class
 Who's responding?
 Effective math instruction?
 Who needs more?
 N = 19
 4 students > 100% of expected growth
 15 students < 100% of expected growth
 9 students with negative growth
Small Group Example
Oral Reading Fluency, probes from 09/11/09 to 01/15/10 (18-week window)

             Probe scores (WCPM)              Needed RoI*   Actual RoI**   % of Expected RoI
Benchmark    44 (week 1) -> 68 (week 18)      1.41
Student 1    35 39 41 45 42 45 52 57 62       1.83          1.49           106%
Student 2    28 38 42 40 50 55 64 72 74       2.22          2.77           196%
Student 3    26 28 32 31 27 29 35 34 38       2.33          0.57           41%
Student 4    31 35 39 45 42 47 53 58 65       2.06          1.90           135%
Student 5    40 44 38 48 52 64 72 74 78       1.56          2.62           186%

* Needed RoI based on the difference between the week 1 score and the Benchmark score for week 18, divided by 18 weeks.
** Actual RoI based on linear regression of all data points.
(Expected RoI at benchmark level from the Oral Reading Fluency Adequate Response Table above; benchmarks based on DIBELS Goals. Fuchs, Fuchs, Hamlett, Walz, & Germann, 1993.)
Intervention Group
 Intervention working for how many?
 Can we assume fidelity of intervention based on results?
 Who needs more?
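Because real probe dates are rarely evenly spaced, it helps to convert calendar dates to weeks before fitting the regression for Actual RoI. A sketch (the function and the synthetic data are illustrative, not from the presenters' templates):

```python
from datetime import date

def roi_per_week(points):
    """Ordinary-least-squares slope in score units per week,
    where each point is a (date, score) pair. Skipped or unevenly
    spaced probe dates are handled naturally."""
    x0 = points[0][0]
    xs = [(d - x0).days / 7 for d, s in points]  # weeks since first probe
    ys = [s for d, s in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Synthetic check: probes every other week, growing exactly 1.5 WCPM/week.
probes = [(date(2009, 9, 11), 30), (date(2009, 9, 25), 33),
          (date(2009, 10, 9), 36), (date(2009, 10, 23), 39)]
print(round(roi_per_week(probes), 2))  # 1.5
```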
Individual Kid Example
[Chart: 2nd Grade Reading Progress, words read correct per minute across weekly probes from 09/12/08 to 05/01/09. Benchmark trendline: y = 1.5333x + 42.8; student trendline: y = 0.9903x + 36.873.]
Individual Kid
 Making growth?
 How much? (65% of expected growth.)
 Atypical growth across the year (last 3 data points).
 Continue? Make a change? Need more data?
RoI and Behavior?
[Chart: Percent of Time Engaged in Appropriate Behavior across 18 sessions, with a separate trendline fit to each phase: Baseline (y = 2x + 22), Condition 1 (y = 3.9x + 19.8), and Condition 2 (y = 7.2143x - 1.5).]
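The per-phase trendlines come from fitting an ordinary-least-squares line separately within each phase. A sketch with hypothetical session data (the percentages are invented for illustration):

```python
def slope(ys):
    """OLS slope for equally spaced sessions 1..n."""
    n = len(ys)
    xs = range(1, n + 1)
    mx, my = (n + 1) / 2, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Hypothetical % time on task, split at the phase-change lines:
baseline    = [24, 26, 28, 30]       # fits to slope 2.0 per session
condition_1 = [28, 32, 35, 40, 43]   # fits to slope 3.8 per session
print(round(slope(baseline), 2), round(slope(condition_1), 2))
```

Comparing the phase slopes, rather than one line through all sessions, shows whether each condition changed the rate of behavior growth.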
Step 4: Figure out how to fit
Best Practice into Public
Education
Things to Consider
 Who is at risk and needs progress monitoring?
 Who will collect, score, and enter the data?
 Who will monitor student growth, when, and how often?
 What changes should be made to instruction & intervention?
 What about monitoring off of grade level?
Who is At-Risk and needs progress monitoring?
 Below level on universal screening
Entering 4th Grade Example (cut scores in parentheses)

            DORF (110)   ISIP TRWM (55)   4Sight (1235)   PSSA (1235)
Student A   115          58               1255            1232
Student B    85          48               1216            1126
Student C    72          35               1056            1048
Who will collect, score, and enter the data?
 Using MBSP for math, teachers can administer probes to the whole class.
 DORF probes must be administered one-on-one, and creativity pays off (train and use art, music, library, etc. specialists).
 Schedule progress monitoring for math and reading every other week.
[Table: every-other-week progress monitoring rotation, in which each grade (1st-5th) is probed in reading one week and math the other, alternating between Week 1 and Week 2.]
Who will monitor student growth, when, and how often?
 Best Practices in Data-Analysis Teaming (Kovaleski & Pedersen, 2008)
 Chambersburg Area School District Elementary Response to Intervention Manual (McCrea et al., 2008)
 Derry Township School District Response to Intervention Model
(http://www.hershey.k12.pa.us/56039310111408/lib/56039310111408/_files/Microsoft_Word__Response_to_Intervention_Overview_of_Hershey_Elementary_Model.pdf)
What changes should be made to instruction & intervention?
 Ensure treatment fidelity!
 Increase instructional time (active and engaged)
 Decrease group size
 Gather additional, diagnostic information
 Change the intervention
Final Exam…
 Student Data: 27, 29, 26, 34, 27, 32, 39, 45, 43, 49, 51, --, --, 56, 51, 52, --, 57.
 Benchmark Data: BOY = 40, MOY = 68.
 What is the student's RoI?
 How does the RoI compare to the expected and needed RoIs?
 What steps would your team take next?
 What if the Benchmarks were 68 and 90 instead?
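One way to work the exam, assuming the missed weeks ("--") are simply dropped before fitting the regression (a minimal Python sketch):

```python
# The final-exam data; None marks a missed probe week.
scores = [27, 29, 26, 34, 27, 32, 39, 45, 43, 49, 51,
          None, None, 56, 51, 52, None, 57]
points = [(week, s) for week, s in enumerate(scores, start=1) if s is not None]

# OLS slope over the weeks that actually have data.
n = len(points)
mx = sum(w for w, _ in points) / n
my = sum(s for _, s in points) / n
sxy = sum((w - mx) * (s - my) for w, s in points)
sxx = sum((w - mx) ** 2 for w, _ in points)
actual_roi = sxy / sxx                # about 1.98 WCPM/week
needed_roi = (68 - scores[0]) / 18    # (MOY benchmark - week-1 score) / 18 weeks

print(round(actual_roi, 2), round(needed_roi, 2))  # 1.98 2.28
```

This gives an actual RoI of about 1.98 WCPM/week against a needed RoI of 2.28, roughly 87% of the needed rate; with benchmarks of 68 and 90 instead, the same arithmetic gives a needed RoI of (90 - 27)/18 = 3.5, a much larger gap.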
Questions? & Comments!
The RoI Web Site
 http://sites.google.com/site/rateofimprovement/
 Download powerpoints, handouts, Excel graphs, charts, articles, etc.

Caitlin Flinn: [email protected]
Andy McCrea: [email protected]
Matt Ferchalk: [email protected]
Resources
 www.interventioncentral.com
 www.aimsweb.com
 http://dibels.uoregon.edu
 www.nasponline.org
Resources
 www.fcrr.org (Florida Center for Reading Research)
 http://ies.ed.gov/ncee/wwc// (What Works Clearinghouse)
 http://www.rti4success.org (National Center on RtI)
References
Ardoin, S. P., & Christ, T. J. (2009). Curriculum-based measurement of oral reading: Standard
errors associated with progress monitoring
outcomes from DIBELS, AIMSweb, and an
experimental passage set. School Psychology
Review, 38(2), 266-283.
Ardoin, S. P. & Christ, T. J. (2008). Evaluating
curriculum-based measurement slope estimates
using triannual universal screenings. School
Psychology Review, 37(1), 109-125.
References
Christ, T. J. (2006). Short-term estimates of
growth using curriculum-based measurement
of oral reading fluency: Estimating standard
error of the slope to construct confidence
intervals. School Psychology Review, 35(1),
128-133.
Deno, S. L. (1985). Curriculum-based
measurement: The emerging alternative.
Exceptional Children, 52, 219-232.
References
Deno, S. L., Fuchs, L.S., Marston, D., &
Shin, J. (2001). Using curriculum-based
measurement to establish growth
standards for students with learning
disabilities. School Psychology Review,
30, 507-524.
Flinn, C. S. (2008). Graphing rate of
improvement for individual students.
InSight, 28(3), 10-12.
References
Fuchs, L. S., & Fuchs, D. (1998). Treatment
validity: A unifying concept for reconceptualizing
the identification of learning disabilities. Learning
Disabilities Research and Practice, 13, 204-219.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., &
Germann, G. (1993). Formative evaluation of
academic progress: How much growth can we
expect? School Psychology Review, 22, 27-48.
References
Gall, M.D., & Gall, J.P. (2007). Educational
research: An introduction (8th ed.). New
York: Pearson.
Jenkins, J. R., Graff, J. J., & Miglioretti, D.L.
(2009). Estimating reading growth using
intermittent CBM progress monitoring.
Exceptional Children, 75, 151-163.
References
Karwowski, W. (2006). International
encyclopedia of ergonomics and human
factors. Boca Raton, FL: Taylor & Francis
Group, LLC.
Shapiro, E. S. (2008). Best practices in setting
progress monitoring goals for academic skill
improvement. In A. Thomas and J. Grimes
(Eds.), Best practices in school psychology V
(Vol. 2, pp. 141-157). Bethesda, MD: National
Association of School Psychologists.
References
Vogel, D. R., Dickson, G. W., & Lehman, J.
A. (1990). Persuasion and the role of
visual presentation support. The UM/3M
study. In M. Antonoff (Ed.), Presentations
that persuade. Personal Computing, 14.
References
Burns, M. (2008, October). Data-based problem analysis and interventions within
RTI: Isn’t that what school psychology is all about? Paper presented at the
Association of School Psychologists of Pennsylvania Annual Conference, State
College, PA.
Ferchalk, M. R., Richardson, F. & Cogan-Ferchalk, J.R. (2010, October). Using oral
reading fluency data to create an accurate prediction model for PSSA Performance.
Poster session presented at the Association of School Psychologists of Pennsylvania
Annual Conference, State College, PA.
Hintze, J., & Silberglitt, B. (2005). A Longitudinal Examination of the Diagnostic
Accuracy and Predictive Validity of R-CBM and High-Stakes Testing. School
Psychology Review, 34(3), 372-386.
McGlinchey, M., & Hixson, M. (2004). Using Curriculum-Based Measurement to
Predict Performance on State Assessments in Reading. School Psychology Review,
33(2), 193-203.
Shapiro, E., Keller, M., Lutz, J., Santoro, L., & Hintze, J. (2006). Curriculum-Based
Measures and Performance on State Assessment and Standardized Tests: Reading
and Math Performance in Pennsylvania. Journal of Psychoeducational Assessment,
24(1), 19-35.
References
Silberglitt, B. (2008). Best practices in using technology for data-based decision making. In A. Thomas and J. Grimes (Eds.), Best
practices in school psychology V. Bethesda, MD: National
Association of School Psychologists.
Silberglitt, B., Burns, M., Madyun, N., & Lail, K. (2006). Relationship
of reading fluency assessment data with state accountability test
scores: A longitudinal comparison of grade levels. Psychology in the
Schools, 43(5), 527-535.
Stage, S., & Jacobsen, M. (2001). Predicting Student Success on a
State-mandated Performance-based Assessment Using Oral
Reading Fluency. School Psychology Review, 30(3), 407.
Stewart, L.H. & Silberglitt, B. (2008). Best practices in Developing
Academic Local Norms. In A. Thomas and J. Grimes (eds.) Best
practices in school psychology V. Bethesda, MD: National
Association of School Psychologists.