Additional Resources

Research has demonstrated that students just below the proficiency bar were most likely to make large gains in the NCLB era, while high achievers made lesser gains. Those most victimized by this regime were high-achieving poor and minority students—kids who were dependent on the school system to cultivate their potential and accelerate their achievement. Here are some of the studies that have focused on this issue:

• Jonathan Plucker, Jennifer Giancola, Grace Healey, Daniel Arndt, and Chen Wang, Equal Talents, Unequal Opportunities: A Report Card on State Support for Academically Talented Low-Income Students (Lansdowne, VA: Jack Kent Cooke Foundation, 2015), http://www.excellencegap.org/state-report.
• Jonathan Plucker, Jacob Hardesty, and Nathan Burroughs, Talent on the Sidelines: Excellence Gaps and America's Persistent Talent Underclass (Storrs, CT: University of Connecticut, Center for Education Policy Analysis, 2013), http://cepa.uconn.edu/mindthegap.
• Robert Theaker, Yun Xiang, Michael Dahlin, John Cronin, and Sarah Durant, Do High Flyers Maintain Their Altitude? Performance Trends of Top Students (Washington, D.C.: Thomas B. Fordham Institute, 2011), http://edexcellence.net/publications/high-flyers.html.
• Dale Ballou and Matthew G. Springer, Achievement Trade-Offs and No Child Left Behind (Nashville, TN: Vanderbilt University, 2008), http://www.vanderbilt.edu/schoolchoice/documents/achievement_tradeoffs.pdf.
• Tom Loveless, Steven Farkas, and Ann Duffet, High-Achieving Students in the Era of NCLB (Washington, D.C.: Thomas B. Fordham Institute, 2008), http://edex.s3-us-west-2.amazonaws.com/publication/pdfs/20080618_high_achievers_7.pdf.
• Joshua S. Wyner, John M. Bridgeland, and John J. DiIulio, Jr., Achievement Trap: How America Is Failing Millions of High-Achieving Students from Lower-Income Families (Washington, D.C.: Jack Kent Cooke Foundation, 2006), http://www.issuelab.org/resource/achievement_trap_how_america_is_failing_millions_of_high-achieving_students_from_lowerincome_families.
• Jennifer Booher-Jennings, "Below the Bubble: 'Educational Triage' and the Texas Accountability System" (New York, NY: Columbia University, 2005), http://aer.sagepub.com/content/42/2/231.short.


Recommended Indicators and Weights

Indicator                   K-8              High School
Achievement index           0-25 percent     0-25 percent
Growth for all students     ≥ 50 percent     ≥ 25 percent
Growth for subgroups        10-20 percent    10-20 percent
ELL progress                Variable         Variable
Graduation                  N/A              ≤ 25 percent
School quality              10-15 percent    15-20 percent
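To make the weighting concrete, here is a minimal sketch (ours, not from the slides) that combines hypothetical indicator scores into a single K-8 rating using illustrative weights chosen from inside the recommended ranges; every indicator name, weight, and score below is an assumption for illustration only.

```python
# Illustrative only: hypothetical K-8 weights drawn from the recommended ranges above.
# Indicator scores are assumed to already sit on a common 0-100 scale.

K8_WEIGHTS = {
    "achievement_index": 0.20,     # 0-25 percent
    "growth_all_students": 0.50,   # >= 50 percent
    "growth_subgroups": 0.15,      # 10-20 percent
    "ell_progress": 0.05,          # variable, depends on ELL enrollment
    "school_quality": 0.10,        # 10-15 percent
}

def summative_rating(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of indicator scores; weights are renormalized to sum to 1."""
    total_weight = sum(weights.values())
    return sum(weights[name] * scores[name] for name in weights) / total_weight

# A hypothetical school's indicator scores.
example_scores = {
    "achievement_index": 60.0,
    "growth_all_students": 70.0,
    "growth_subgroups": 50.0,
    "ell_progress": 80.0,
    "school_quality": 70.0,
}

print(summative_rating(example_scores, K8_WEIGHTS))  # about 65.5 for this hypothetical school
```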


Other indicators that focus on high-achievers

• AP/IB achievement (e.g., number of students scoring 3 or higher on AP exams or 4 or higher on IB exams)
• Dual enrollment credits with quality control (i.e., credits accepted by state universities)
• Performance on college entry exams such as the SAT, ACT, and ACCUPLACER
• Performance on SAT subject tests

How should we use accountability to promote equity?

X Proficiency rates
X Growth-to-proficiency models
X Base most of schools' grades on growth for low-achievers

Instead, base 10-20 percent of a school's rating on growth for students who are low-performing relative to an external benchmark (e.g., below-proficient students) and/or students who belong to a traditionally underserved subgroup.

Slow Progress

• 15 of the 17 states that submitted ESSA plans indicated that they will assign summative ratings to schools.
• On average, these 15 states plan to base 56 percent of a school's rating on either a performance index or growth for all students – both measures that give schools an incentive to pay attention to all of their students.
• In 2016, fourteen states (or 28 percent) used achievement indexes or average scale scores instead of proficiency rates. So far, seven of the 17 states that submitted ESSA plans (or 41 percent) plan to use one of these measures as their indicator of academic achievement.
• In 2016, the average state based just 30 percent of its K-8 schools' ratings on growth for all students. On average, the 15 states that have submitted ESSA plans so far plan to base 36 percent of their K-8 school ratings on growth for all students.


How states should redesign their accountability systems under ESSA

By David Griffith and Mike Petrilli – November 10, 2016

We at the Fordham Institute have long held that there is no one best way to design a state accountability system. Still, we know that states are now putting pen to paper on their accountability plans and that many of them want advice about what to do. So here is our attempt to outline an ideal accountability system for states. Consistent with ESSA, our proposed accountability system rates schools based on a range of indicators.

Indicators of Academic Achievement (10-25 percent of summative school ratings)

In the average state, measures of academic achievement currently count for about half of schools' summative ratings. However, because these measures are strongly correlated with student demographics and prior achievement, we believe they should count for at most a quarter of schools' ratings going forward. Further, instead of using raw proficiency rates, which encourage schools to focus on the "bubble kids" (i.e., those just above or below the proficiency threshold), states should use average test scores or a "performance index."

Indicators of Student Growth (50-90 percent of K-8 school ratings, 40-80 percent of high school ratings)

Because they are the best indicators of schools' overall performance, measures that capture the academic growth of all students should count for at least half of elementary and middle schools' ratings and at least 40 percent of high schools' ratings. Currently, forty-five states estimate growth in English Language Arts and math at the K-8 level, and thirty-five do so in high school. Thus, assigning more weight to these measures is something most states could do right away. In the medium term, states should also continue to develop their capacity to estimate growth at the high school level, as well as in other core subjects such as science and social studies, which will allow them to weight growth even more heavily in the future. Because growth scores can be unstable, we encourage states to average over two years when calculating school grades. We also strongly urge states not to use "growth to proficiency" measures, as these encourage schools to ignore the needs of their high-achievers (and are poor indicators of school quality). Similarly, we urge those states that base a portion of their grade on the progress of low-achieving students or other subgroups not to overdo it. In our view, at least three quarters of the weight states assign to growth should be based on growth for all students.

Indicators of Progress toward English Language Proficiency (Variable)

We will leave the debates over how best to serve English Language Learners to those with expertise in this area. However, common sense suggests that the weight assigned to ELL measures should vary based on the percentage of a school's students who are classified as ELL.

High School Graduation (10-25 percent)

Though we don't have strong opinions about how states measure graduation, it's important that they don't assign too much weight to this indicator, lest they encourage schools to lower their standards for earning a diploma. In our view, basing 10-25 percent of high schools' ratings on some combination of 4- and 5-year graduation rates is a reasonable approach.

Indicators of Student Success or School Quality (10-20 percent)

There is broad agreement that states' current accountability systems are unfortunately dependent on standardized tests that cannot capture all the skills that students need to acquire, and that have sometimes encouraged teachers to engage in harmful "test prep."
Yet many of the alternatives to testing that have been proposed, while promising in theory, are problematic in practice. Consequently, though we support the goal of reducing the emphasis on testing, we encourage states to be deliberate in their approach. Over time, the weight assigned to these indicators may grow beyond the parameters we specify here, but first we need to figure out what works. For now, here are five ideas we believe states should consider:

"College and Career Ready" indicators: Many states already include AP, IB, ACT, and SAT achievement in their high school rating systems, and we heartily endorse all of these measures, especially those tied to achievement on AP/IB tests, which are precisely the sort of high-quality assessments that critics of dumbed-down standardized tests have long called for. Likewise, we support dual enrollment-based measures, provided there is some form of quality control (e.g., provided that the credits students earn are accepted by state universities). Finally, we endorse indicators that are tied to industry credentials or certificates, which can be useful to students who are entering the job market directly out of high school. New Mexico, which already includes more than a dozen "college and career readiness" indicators in its high school accountability system, is a good example of what is possible in this area.

Subsequent performance/persistence: How students fare after they leave a school says a lot about what they learned while they were enrolled, and the degree to which that learning was accurately reflected in their test scores – or not. To guard against illusory achievement gains, states should rate elementary and middle schools based on the on-time promotion rate of students in the next two grades after they leave a school (as Morgan Polikoff of the University of Southern California recommends). Similarly, they should rate high schools based on postsecondary remediation and/or completion rates, which are preferable to enrollment rates.

Student/teacher retention: In a choice-based system, student retention is an important indicator of school quality that disincentivizes "creaming." Similarly, teacher retention is an important indicator of teacher satisfaction that is strongly correlated with student growth. Because they are essentially immune to gaming, both of these ideas deserve more attention than they have received to date. In neither case is 100 percent retention the goal, but very low retention rates are surely a sign that something is amiss.

Chronic absenteeism: Because the link between attendance and students' long-term success is so clear (and because most states already collect attendance data), chronic absenteeism is an obvious candidate for the "school quality" indicator. States might also consider including chronic absenteeism for teachers.

Student surveys: Many teacher evaluation systems already incorporate the results of student surveys, which research suggests can also predict school and principal value-added. Unlike teacher surveys, which are easily gamed, student surveys are a potentially useful addition to existing evaluation systems, provided that states take sensible steps to ensure the integrity of the results.

Obviously, none of these measures is perfect. In particular, because schools that serve difficult populations are likely to have higher student/teacher turnover, higher remediation rates, and lower attendance, these measures are likely to be biased if the goal of the system is to gauge school performance fairly. In light of this concern, depending on their goals, states may wish to adjust schools' scores on these indicators by controlling for demographics, geography, and other factors, much as they already do when estimating student growth.

***

We are in one of those rare moments in the education world when real change is not only possible but likely. Beneath the hideous clamor of the 24-hour news cycle, the quiet murmur of state boards of education and gubernatorial subcommittees speaks to the survival of a different America, in which Democrats and Republicans can still come together to do what's best for kids. Let's not let them down.


Growth plus proficiency? Why states are turning to a hybrid strategy for judging schools (and why some experts say they shouldn’t)

By Matt Barnum – June 22, 2017

A compromise in a long-running debate over how to evaluate schools is gaining traction as states rewrite their accountability systems. But experts say it could come with familiar drawbacks — especially in fairly accounting for the challenges poor students face.

Under No Child Left Behind, schools were judged by the share of students deemed proficient in math and reading. The new federal education law, ESSA, gives states new flexibility to consider students' academic growth, too. This is an approach that some advocates and researchers have long pushed for, saying that it is a better way to judge schools that serve students who start far below proficiency.

But some states are proposing measuring academic growth through a hybrid approach that combines both growth and proficiency. (That's in addition to using proficiency metrics where they are required.) A Chalkbeat review of ESSA plans found that a number of places plan to use a hybrid metric to help decide which of their schools are struggling the most, including Arizona, Connecticut, Delaware, Louisiana, Massachusetts, and Washington D.C.

The idea has a high-profile supporter: The Education Trust, a civil rights and education group now headed by former U.S. Education Secretary John King. But a number of researchers say the approach risks unfairly penalizing high-poverty schools and maintaining some of the widely perceived flaws of No Child Left Behind.

These questions have emerged because ESSA, the new federal education law, requires states to use academic and other measures to identify 5 percent of their schools as struggling. States have the option to include "academic progress" in their accountability systems, and many are doing so. This is a welcome trend, says Andrew Ho of Harvard, who has written a book on the different ways to measure student progress. Systems that use proficiency percentages alone, rather than accounting for growth, "are a disaster both for measurement and for usefulness," Ho said. "They are extremely coarse and dangerously misleading."

[Image: Under a growth-to-proficiency model, Student A would be considered on track to proficiency by grade 6 based on the growth from grades 3 to 4, but Students B and C would not. Source: Ho's "A Practitioner's Guide to Growth Models."]

States that propose using this hybrid measure — commonly called "growth to proficiency" or "growth to standard" — have offered varying degrees of specificity in their plans about how they will calculate it. The basic idea is to measure whether students will meet or maintain proficiency within a set period of time, assuming they continue to grow at the same rate. Schools are credited for students deemed on track to meet the standard in the not-too-distant future, even if the students aren't there yet.

This tends to reward schools that serve students who are already near, at, or above the proficiency standard, meaning that schools with a large number of students in poverty will likely get lower scores on average. It also worries researchers wary of re-creating systems that incentivize schools to focus on students near the proficiency bar, as opposed to those far below or above it. That phenomenon has been observed in some research on accountability systems focused on proficiency.
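The basic mechanics can be sketched in a few lines. The example below (ours, not from any state's plan) projects a student's most recent annual gain forward and checks whether the projection reaches a hypothetical proficiency cut score by a target grade; the cut score, scale scores, and grades are made up and only loosely mirror the Student A/B/C illustration above.

```python
# Hypothetical growth-to-proficiency check: project a student's latest annual gain
# forward and ask whether the projection reaches the proficiency cut score in time.

PROFICIENCY_CUT = 300   # hypothetical scale-score cut for "proficient"

def on_track_to_proficiency(prior_score: float, current_score: float,
                            current_grade: int, target_grade: int) -> bool:
    """Linear projection: assume the student keeps gaining (current - prior) points per year."""
    annual_gain = current_score - prior_score
    years_left = target_grade - current_grade
    projected = current_score + annual_gain * years_left
    return projected >= PROFICIENCY_CUT

# Three students grew by different amounts between grades 3 and 4.
# Only the first student's projected grade-6 score clears the hypothetical cut.
print(on_track_to_proficiency(255, 275, current_grade=4, target_grade=6))  # 275 + 20*2 = 315 -> True
print(on_track_to_proficiency(250, 262, current_grade=4, target_grade=6))  # 262 + 12*2 = 286 -> False
print(on_track_to_proficiency(240, 248, current_grade=4, target_grade=6))  # 248 +  8*2 = 264 -> False
```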


"As an accountability metric, growth-to-proficiency is a terrible idea for the same reason that achievement-level metrics are a bad idea — it is just about poverty," said Cory Koedel, an economist at the University of Missouri who has studied school accountability. He has argued that policymakers should try to ensure ratings are not correlated with measures of poverty.

Researchers tend to say that the strongest basis for sorting out the best and worst schools (at least as measured by test scores) is to rely on sophisticated value-added calculations. Those models control for where students start, as well as demographic factors like poverty. "If there are going to be high stakes — and I don't suggest that there should be — then the more technically rigorous value-added models become the best way to approach teacher- and school-level accountability," said Ho.

A large share of states are planning to use a value-added measure or similar approach as part of their accountability systems, in several cases alongside the growth-to-proficiency measure. Some research has found that these complex statistical models can be an accurate gauge of how teachers and schools affect students' test scores, though it remains the subject of significant academic debate.

But The Education Trust, which has long backed test-based accountability, is skeptical of these growth models, saying that they water down expectations for disadvantaged students and don't measure whether students will eventually reach proficiency. "Comparisons to peers won't reveal whether that student will one day meet grade-level standards," the group's Midwest chapter stated in a report on Michigan's ESSA state plan. "This risks setting lower expectations for students of color and low-income students, and does not incentivize schools to accelerate learning for historically underserved student groups."

In an email, Natasha Ushomirsky, EdTrust's policy director, said the group supports measures like growth to proficiency over value-added models "because a) they do a better job of communicating expectations for raising student achievement, and b) they can be used to understand whether schools are accelerating learning for historically underserved students, and prompt them to do so." Of the value-added approach, Ushomirsky said, "A lower-scoring student is likely to be compared only to other lower-scoring students, while a higher-scoring student is compared to other higher-scoring students. This means that the same … score may represent very different amounts of progress for these two students."

Marty West, a professor at Harvard, says the most prudent approach is to report proficiency data transparently, but to use value-added growth to identify struggling schools for accountability purposes. "There are just too many unintended consequences from using [proficiency] or any hybrid approach as the basis of your performance evaluation system," he said. "The most obvious is making educators less interested in teaching in [high-poverty] schools because they know they have an uphill battle with respect to any accountability rating — and that's the last thing we want."


Letter from Morgan Polikoff to Secretary King

July 22, 2016

The Honorable John King
Secretary of Education
U.S. Department of Education
400 Maryland Avenue, SW
Washington, D.C. 20202

Dear Mr. Secretary:

The Every Student Succeeds Act (ESSA) marks a great opportunity for states to advance accountability systems beyond those from the No Child Left Behind (NCLB) era. The Act (Section 1111(c)(4)(B)(i)(I)) requires states to use an indicator of academic achievement that "measures proficiency on the statewide assessments in reading/language arts and mathematics." The proposed rulemaking (§ 200.14) would clarify this statutory provision to say that the academic achievement indicator must "equally measure grade-level proficiency on the reading/language arts and mathematics assessments."

We write this letter to argue that the Department of Education should not mandate the use of proficiency rates as a metric of school performance under ESSA. That is, states should not be limited to measuring academic achievement using performance metrics that focus only on the proportion of students who are grade-level proficient—rather, they should be encouraged, or at a minimum allowed, to use performance metrics that account for student achievement at all levels, provided the state defines what performance level represents grade-level proficiency on its reading/language arts and mathematics assessments.

Moving beyond proficiency rates as the sole or primary measure of school performance has many advantages. For example, a narrow focus on proficiency rates incentivizes schools to focus on those students near the proficiency cut score, while an approach that takes into account all levels of performance incentivizes a focus on all students. Furthermore, measuring performance using the full range of achievement provides additional and useful information for parents, practitioners, researchers, and policymakers for the purposes of decisionmaking and accountability, including more accurate information about the differences among schools.

Reporting performance in terms of the percentage above proficient is problematic in several important ways. Percent proficient:

1. Incentivizes schools to focus only on students around the proficiency cutoff rather than all students in a school (Booher-Jennings, 2005; Neal & Schanzenbach, 2010). This can divert resources from students who are at lower or higher points in the achievement distribution, some of whom may need as much or more support than students just around the proficiency cut score (Schwartz, Hamilton, Stecher, & Steele, 2011). This has been shown to influence which students in a state benefit (i.e., experience gains in their academic achievement) from accountability regulations (Neal & Schanzenbach, 2010).

2. Encourages teachers to focus on bringing students to a minimum level of proficiency rather than continuing to advance student learning to higher levels of performance beyond proficiency.

3. Is not a reliable measure of school performance. For example, percent proficient is an inappropriate measure of progress over time because changes in proficiency rates are unstable and measured with error (Ho, 2008; Linn, 2003; Kane & Staiger, 2002). The percent proficient is also dependent upon the state-determined cut score for proficiency on annual assessments (Ho, 2008), which varies from state to state and over time. Percent proficient further depends on details of the testing program that shouldn't matter, such as the composition of the items on the state test or the type of method used to set performance standards. These problems are compounded in small schools or in subgroups that are small in size.

4. Is a very poor measure of performance gaps between subgroups, because percent proficient will be affected by how a proficiency cut score on the state assessments is chosen (Ho, 2008; Holland, 2002). Indeed, prior research suggests that using percent proficient can even reverse the sign of changes in achievement gaps over time, relative to what a more accurate method would show (Linn, 2007).

5. Penalizes schools that serve larger proportions of low-achieving students (Kober & Riddle, 2012), as schools are not given credit for improvements in performance other than the move from not-proficient to proficient.

We suggest two practices for measuring achievement that lessen or avoid these problems. Importantly, some of these practices were utilized by states in ESEA Flexibility Waivers and are improvements over NCLB practices (Polikoff, McEachin, Wrabel, & Duque, 2014).

Average Scale Scores

The best approach for measuring student achievement levels for accountability purposes under ESSA is to use average scale scores. Rather than presenting performance as the proportion of students who have met the minimum-proficiency cut score, states could present the average (mean) score of students within the school and the average performance of each subgroup of students. If the Department believes percent proficient is also important for reporting purposes, these values could be reported alongside the average scale scores. The use of mean scores places the focus on improving the academic achievement of all students within a school, not just those whose performance is near the state proficiency cut score (Center for Education Policy, 2011). Such a practice also increases the amount of variation in school performance measures each year, providing for improved differentiation between schools that may have otherwise similar proficiency rates. In fact, Ho (2008) argues that if a single rating is going to be used for reporting on performance, it should be a measure of average performance, because such measures incorporate the value of every score (student) into the calculation and the average can be used for more advanced analyses. The measurement of gaps between key demographic groups of students, a key goal of ESSA, is dramatically improved with the use of average scores rather than the proportion of proficient students (Holland, 2002; Linn, 2007).
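As a small illustration of this kind of reporting, the sketch below (ours, with made-up subgroups and scale scores) computes a school's mean scale score overall and by subgroup, alongside the percent-proficient figure that could be reported next to it.

```python
# Made-up data: (subgroup, scale score) for each student in one hypothetical school.
PROFICIENCY_CUT = 300  # hypothetical cut score

students = [
    ("economically_disadvantaged", 268), ("economically_disadvantaged", 305),
    ("economically_disadvantaged", 288), ("not_disadvantaged", 322),
    ("not_disadvantaged", 297), ("not_disadvantaged", 341),
]

def mean(xs):
    return sum(xs) / len(xs)

all_scores = [score for _, score in students]
print("School mean scale score:", round(mean(all_scores), 1))
print("Percent proficient:", round(100 * sum(s >= PROFICIENCY_CUT for s in all_scores) / len(all_scores), 1))

# Mean scale score reported separately for each subgroup.
for group in sorted({g for g, _ in students}):
    group_scores = [score for g, score in students if g == group]
    print(group, "mean:", round(mean(group_scores), 1))
```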

Proficiency Indexes

If average scale scores cannot be used, a weaker alternative that is still superior to percent proficient would be to allow states to use proficiency indexes. Schools under this policy would be allocated points based on multiple levels of performance. For example, a state could identify four levels of performance on annual assessments: Well Below Proficient, Below Proficient, Proficient, and Advanced Proficient. Schools would receive no credit for students who are Well Below Proficient, partial credit for students who are Below Proficient, full credit for students reaching Proficiency, and additional credit for students reaching Advanced Proficiency. Here we present an example using School A and School B.

Proficiency Index Example

School A
Proficiency Category     (A) Points Per Student   (B) # of Students   (C) Index Points
Well Below Proficient    0.0                      27                  0.0
Below Proficient         0.5                      18                  9.0
Proficient               1.0                      33                  33.0
Advanced Proficient      1.5                      22                  33.0
Total                                             100                 75.0

School B
Proficiency Category     (A) Points Per Student   (B) # of Students   (C) Index Points
Well Below Proficient    0.0                      18                  0.0
Below Proficient         0.5                      27                  13.5
Proficient               1.0                      26                  26.0
Advanced Proficient      1.5                      29                  43.5
Total                                             100                 83.0

NCLB Proficiency Rate: School A = 55%, School B = 55%
ESSA Proficiency Index: School A = 75, School B = 83

Under NCLB proficiency rate regulations, both School A and School B would have received a 55% proficiency rate score. Using a proficiency index, the performance of these schools would no longer be identical: a state would be able to compare the two schools while identifying annual meaningful differentiation between the performance of School A and that of School B. The hypothetical case presented here is not the only way a proficiency index can be used. Massachusetts is one example of a state that has used a proficiency index for the purposes of identifying low-performing schools and gaps between subgroups of students (see: ESEA Flexibility Request: Massachusetts, page 32). These indexes are understandable for practitioners, family members, and administrators while also providing additional information regarding the performance of students who are not grade-level proficient.
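The arithmetic behind the index is straightforward to script. The sketch below (ours, not from the letter) reproduces the figures in the table above; the function names and data layout are our own.

```python
# Reproduce the proficiency index and NCLB-style proficiency rate from the example above.

POINTS = {"Well Below Proficient": 0.0, "Below Proficient": 0.5,
          "Proficient": 1.0, "Advanced Proficient": 1.5}

def proficiency_index(counts: dict[str, int]) -> float:
    """Index points per 100 students: partial credit below, bonus credit above, proficiency."""
    total_students = sum(counts.values())
    points = sum(POINTS[level] * n for level, n in counts.items())
    return 100 * points / total_students

def proficiency_rate(counts: dict[str, int]) -> float:
    """Percent of students at Proficient or above, as under NCLB."""
    total_students = sum(counts.values())
    proficient = counts["Proficient"] + counts["Advanced Proficient"]
    return 100 * proficient / total_students

school_a = {"Well Below Proficient": 27, "Below Proficient": 18, "Proficient": 33, "Advanced Proficient": 22}
school_b = {"Well Below Proficient": 18, "Below Proficient": 27, "Proficient": 26, "Advanced Proficient": 29}

print(proficiency_rate(school_a), proficiency_index(school_a))   # 55.0 75.0
print(proficiency_rate(school_b), proficiency_index(school_b))   # 55.0 83.0
```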


The benefit of using such an index, relative to using the proportion of proficient students in a school, is that it incentivizes a focus on all students, not just those around an assessment's proficiency cut score (Linn, Baker, & Betebenner, 2002). Moreover, schools with large proportions of students well below the proficiency cut score are given credit for moving students to higher levels of performance even if they remain below the cut score (Linn, 2003). The use of a proficiency index, or otherwise providing schools credit for students at different points in the achievement distribution, improves the construct validity of the accountability measures over the NCLB proficiency rate measures (Polikoff et al., 2014). In other words, the inferences made about schools (e.g., low-performing or bottom 5%) using the proposed measures are more appropriate than those made using proficiency rates alone.

What We Recommend

Given the findings cited above, we believe the Department should revise its regulations to one of two positions:

1. Explicitly endorsing or encouraging states to use one of the two above-mentioned approaches as an alternative to proficiency rates as the primary measure of school performance. Average scale scores are the superior method.

2. Failing that, clarifying that the law is neutral about the use of proficiency rates versus one of the two above-mentioned alternatives to proficiency rates as the primary measure of school performance.

With the preponderance of evidence showing that schools and teachers respond to incentives embedded in accountability systems, we believe option 1 is the best choice. This option leaves states the authority to determine school performance as they see fit but encourages them to incorporate what we have learned through research about the most accurate and appropriate way to measure school performance levels.

Our Recommendation is Consistent with ESSA

Section 1111(c)(4)(A) of ESEA, as amended by ESSA, requires each state to establish long-term goals: "(i) for all students and separately for each subgroup of students in the State— (I) for, at a minimum, improved— (aa) academic achievement, as measured by proficiency on the annual assessments required under subsection (b)(2)(B)(v)(I);" And Section 1111(c)(4)(B) of ESEA requires the State accountability system to have indicators that are used to differentiate all public schools in the State, including—(i) "academic achievement—(I) as measured by proficiency on the annual assessments required [under other provisions of ESSA]."

Our suggested approach is supportable under these provisions based on the following analysis. The above-quoted provisions in the law that mandate long-term goals and indicators of student achievement based on proficiency on annual assessments do not prescribe how a state specifically uses the concept of proficient performance on the state assessments. The statute does not prescribe that "proficiency" be interpreted to compel differentiation of schools based exclusively on "proficiency rates." Proficiency is commonly taken to mean "knowledge" or "skill" (Merriam-Webster defines it as "advancement in knowledge or skill" or "the quality or state of being proficient," where "proficient" is defined as "well advanced in an art, occupation, or branch of knowledge"). Under either definition, an aggregate performance measure such as the two options described above would clearly qualify as involving a measure of proficiency. Both of the above-mentioned options provide more information about the average proficiency level of a school than an aggregate proficiency rate. Moreover, they address more effectively than proficiency rates the core purposes of ESSA, including incentivizing efforts to educate all children and providing broad discretion to states in designing their accountability systems.

We would be happy to provide more information on these recommendations at your pleasure.

Sincerely,

Morgan Polikoff, Ph.D., Associate Professor of Education, USC Rossier School of Education


Signatories

Alice Huguet, Ph.D., Postdoctoral Fellow, School of Education and Social Policy, Northwestern University
Andrew Ho, Ph.D., Professor of Education, Harvard Graduate School of Education
Andrew Saultz, Ph.D., Assistant Professor, Miami University (Ohio)
Andrew Schaper, Ph.D., Senior Associate, Basis Policy Research
Anna Egalite, Ph.D., Assistant Professor of Education, North Carolina State University
Arie van der Ploeg, Ph.D., retired Principal Researcher, American Institutes for Research
Cara Jackson, Ph.D., Assistant Director of Research & Evaluation, Urban Teachers
Christopher A. Candelaria, Ph.D., Assistant Professor of Public Policy and Education, Vanderbilt University
Cory Koedel, Ph.D., Associate Professor of Economics and Public Policy, University of Missouri
Dan Goldhaber, Ph.D., Director, Center for Education Data & Research, University of Washington Bothell
Danielle Dennis, Ph.D., Associate Professor of Literacy Studies, University of South Florida
Daniel Koretz, Ph.D., Henry Lee Shattuck Professor of Education, Harvard Graduate School of Education
David Hersh, Ph.D. Candidate, Rutgers University Bloustein School of Planning and Public Policy
David M. Rochman, Research and Program Analyst, Moose Analytics
Edward J. Fuller, Ph.D., Associate Professor of Education Policy, The Pennsylvania State University
Eric A. Houck, Associate Professor of Educational Leadership and Policy, University of North Carolina at Chapel Hill
Eric Parsons, Ph.D., Assistant Research Professor, University of Missouri
Erin O'Hara, former Assistant Commissioner for Data & Research, Tennessee Department of Education
Ethan Hutt, Ph.D., Assistant Professor of Education, University of Maryland College Park
Eva Baker, Ed.D., Distinguished Research Professor, UCLA Graduate School of Education and Information Studies, Director, Center for Research on Evaluation, Standards, and Student Testing, Past President, American Educational Research Association
Greg Palardy, Ph.D., Associate Professor, University of California, Riverside
Heather J. Hough, Ph.D., Executive Director, CORE-PACE Research Partnership
Jason A. Grissom, Ph.D., Associate Professor of Public Policy and Education, Vanderbilt University
Jeffrey Nellhaus, Ed.M., Chief of Assessment, Parcc Inc., former Deputy Commissioner, Massachusetts Department of Elementary and Secondary Education
Jeffrey W. Snyder, Ph.D., Assistant Professor, Cleveland State University
Jennifer Vranek, Founding Partner, Education First
John A. Epstein, Ed.D., Education Associate Mathematics, Delaware Department of Education
John Q. Easton, Ph.D., Vice President, Programs, Spencer Foundation, former Director, Institute of Education Sciences
John Ritzler, Ph.D., Executive Director, Research & Evaluation Services, South Bend Community School Corporation
Jonathan Plucker, Ph.D., Julian C. Stanley Professor of Talent Development, Johns Hopkins University
Joshua Cowen, Ph.D., Associate Professor of Education Policy, Michigan State University
Katherine Glenn-Applegate, Ph.D., Assistant Professor of Education, Ohio Wesleyan University
Linda Darling-Hammond, Ed.D., President, Learning Policy Institute, Charles E. Ducommun Professor of Education Emeritus, Stanford University, Past President, American Educational Research Association
Lindsay Bell Weixler, Ph.D., Senior Research Fellow, Education Research Alliance for New Orleans
Madeline Mavrogordato, Ph.D., Assistant Professor, K-12 Educational Administration, Michigan State University
Martin R. West, Ph.D., Associate Professor, Harvard Graduate School of Education
Matt Chingos, Ph.D., Senior Fellow, Urban Institute
Matthew Di Carlo, Ph.D., Senior Fellow, Albert Shanker Institute
Matthew Duque, Ph.D., Data Strategist, Baltimore County Public Schools
Matthew A. Kraft, Ed.D., Assistant Professor of Education and Economics, Brown University
Michael H. Little, Royster Fellow and Doctoral Student, University of North Carolina at Chapel Hill
Michael Hansen, Ph.D., Senior Fellow and Director, Brown Center on Education Policy, Brookings Institution
Michael J. Petrilli, President, Thomas B. Fordham Institute
Nathan Trenholm, Director of Accountability and Research, Clark County (NV) School District
Tiên Lê, Doctoral Fellow, USC Rossier School of Education
Raegen T. Miller, Ed.D., Research Fellow, Georgetown University
Russell Brown, Ph.D., Chief Accountability Officer, Baltimore County Public Schools
Russell Clement, Ph.D., Research Specialist, Broward County Public Schools
Sarah Reckhow, Ph.D., Assistant Professor of Political Science, Michigan State University
Sean P. "Jack" Buckley, Ph.D., Senior Vice President, Research, The College Board, former Commissioner of NCES
Sherman Dorn, Ph.D., Professor, Mary Lou Fulton Teachers College, Arizona State University
Stephani L. Wrabel, Ph.D., USC Rossier School of Education
Thomas Toch, Georgetown University
Tom Loveless, Ph.D., Non-resident Senior Fellow, Brookings Institution